MT2175: Diagonalisation, Jordan Form & Differential Equations

Question 1

What is the general form of a square linear system of differential equations, and how can it be represented in matrix form?

Answer

A general square linear system of differential equations for functions \(y_1(t), y_2(t), ..., y_n(t)\) has the form:

$$ y'_1 = a_{11}y_1 + a_{12}y_2 + \cdots + a_{1n}y_n $$ $$ y'_2 = a_{21}y_1 + a_{22}y_2 + \cdots + a_{2n}y_n $$ $$ \vdots $$ $$ y'_n = a_{n1}y_1 + a_{n2}y_2 + \cdots + a_{nn}y_n $$

where the \(a_{ij}\) are constants.

This can be represented in matrix form as \( \mathbf{y}' = A\mathbf{y} \), where:

  • \(\mathbf{y}'\) is the vector of derivatives: \((\frac{dy_1}{dt}, ..., \frac{dy_n}{dt})^T\)
  • \(A\) is the \(n \times n\) matrix of coefficients \((a_{ij})\)
  • \(\mathbf{y}\) is the vector of functions: \((y_1, ..., y_n)^T\)
Source: 1_MT2175.pdf, p. 16

Question 2

If a system of differential equations is given by \(\mathbf{y}' = A\mathbf{y}\) and the matrix \(A\) is a diagonal matrix, \(A = \text{diag}(\lambda_1, \lambda_2, ..., \lambda_n)\), what is the general solution for each function \(y_i(t)\)?

Answer

If the matrix \(A\) is diagonal, the system of equations is "uncoupled", meaning each equation can be solved independently.

The system becomes:

$$ y'_1 = \lambda_1 y_1, \quad y'_2 = \lambda_2 y_2, \quad ..., \quad y'_n = \lambda_n y_n $$

Each of these is a simple first-order linear differential equation. The solution for each function \(y_i(t)\) is given by:

$$ y_i(t) = y_i(0)e^{\lambda_i t} $$

where \(y_i(0)\) is the initial condition for that function.

Source: 1_MT2175.pdf, p. 16

Question 3

What is the core idea behind using diagonalisation to solve the system \(\mathbf{y}' = A\mathbf{y}\) when \(A\) is a diagonalisable matrix?

Answer

The core idea is to perform a **change of variable** to transform the original, coupled system into a new, uncoupled system that is easy to solve.

If \(A\) is diagonalisable, we can write \(D = P^{-1}AP\). We define a new vector of functions \(\mathbf{z}\) such that \(\mathbf{y} = P\mathbf{z}\). By substituting this into the original equation, we get:

$$ (P\mathbf{z})' = A(P\mathbf{z}) \implies P\mathbf{z}' = AP\mathbf{z} \implies \mathbf{z}' = P^{-1}AP\mathbf{z} $$

This simplifies to \(\mathbf{z}' = D\mathbf{z}\), which is a simple, uncoupled diagonal system. Once \(\mathbf{z}\) is found, we can find \(\mathbf{y}\) by transforming back using \(\mathbf{y} = P\mathbf{z}\).

Source: 1_MT2175.pdf, pp. 17-18

Question 4

When solving \(\mathbf{y}' = A\mathbf{y}\) using the change of variable \(\mathbf{y} = P\mathbf{z}\), how are the initial conditions for \(\mathbf{z}(t)\) related to the initial conditions for \(\mathbf{y}(t)\)?

Answer

The relationship \(\mathbf{y}(t) = P\mathbf{z}(t)\) holds for all \(t\), including \(t=0\). Therefore, the initial condition vector \(\mathbf{y}(0)\) is related to the initial condition vector \(\mathbf{z}(0)\) by the same transformation:

$$ \mathbf{y}(0) = P\mathbf{z}(0) $$

To find the initial conditions for the new system, \(\mathbf{z}(0)\), we simply solve this matrix equation:

$$ \mathbf{z}(0) = P^{-1}\mathbf{y}(0) $$
Source: 1_MT2175.pdf, p. 19

Question 5

What is a Jordan block?

Answer

A Jordan block is a square matrix with a specific structure. A \(k \times k\) matrix \(B\) is a Jordan block if it has the same value \(\lambda\) on the main diagonal, 1s on the superdiagonal (the diagonal directly above the main one), and 0s everywhere else.

For \(k \ge 2\), the structure is:

$$ B = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda & 1 \\ 0 & \cdots & 0 & 0 & \lambda \end{pmatrix} $$
Source: 1_MT2175.pdf, p. 23

Question 6

What is a Jordan matrix, and what is the Jordan normal form of a matrix A?

Answer

A **Jordan matrix** is a block diagonal matrix where each diagonal block is a Jordan block.

$$ J = \begin{pmatrix} B_1 & & \\ & \ddots & \\ & & B_r \end{pmatrix} $$

The **Jordan normal form (JNF)** of a square matrix \(A\) is a Jordan matrix \(J\) that is similar to \(A\). This means there exists an invertible matrix \(P\) such that:

$$ J = P^{-1}AP $$

The Jordan Normal Form Theorem states that every square matrix has a Jordan normal form (over the complex numbers), which is unique up to the ordering of the Jordan blocks.

Source: 1_MT2175.pdf, pp. 21-23

Question 7

Why is the Jordan normal form useful for solving systems of differential equations \(\mathbf{y}' = A\mathbf{y}\)?

Answer

Not all matrices are diagonalisable. For any square matrix \(A\), we can find its Jordan form \(J = P^{-1}AP\), which is "almost diagonal". Using the change of variable \(\mathbf{y} = P\mathbf{z}\), we transform the system \(\mathbf{y}' = A\mathbf{y}\) into:

$$ \mathbf{z}' = J\mathbf{z} $$

This new system is not completely uncoupled, but it is "almost uncoupled". The equations corresponding to each Jordan block can be solved sequentially, starting from the last equation in the block and working backwards, which is much simpler than solving the original fully coupled system.

Source: 1_MT2175.pdf, p. 25

Question 8

Consider the system \(\mathbf{z}' = J\mathbf{z}\) where \(J\) is a single \(3 \times 3\) Jordan block with eigenvalue \(\lambda\). Write out the system of equations for \(z_1, z_2, z_3\).

Answer

A single \(3 \times 3\) Jordan block \(J\) with eigenvalue \(\lambda\) is: $$ J = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} $$ The system \(\mathbf{z}' = J\mathbf{z}\) is therefore:

$$ z'_1 = \lambda z_1 + z_2 $$ $$ z'_2 = \lambda z_2 + z_3 $$ $$ z'_3 = \lambda z_3 $$

This system can be solved by back substitution, starting with the equation for \(z_3\).

Source: 1_MT2175.pdf, p. 26

Question 9

State Theorem 2.2 from the subject guide, which gives the general solution to a system of differential equations \(\mathbf{w}' = B\mathbf{w}\) where \(B\) is a single \(k \times k\) Jordan block.

Answer

The theorem states that the general solution to \(\mathbf{w}' = B\mathbf{w}\) is given by:

$$ w_k(t) = c_k e^{\lambda t} $$ $$ w_{k-1}(t) = c_{k-1}e^{\lambda t} + c_k t e^{\lambda t} $$

And in general, for \(j = 1, ..., k\):

$$ w_j(t) = e^{\lambda t} \left( c_j + c_{j+1}t + c_{j+2}\frac{t^2}{2!} + \cdots + c_k \frac{t^{k-j}}{(k-j)!} \right) $$

where \(c_j\) are arbitrary constants.
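
A numerical spot-check of this formula (my own, not from the subject guide; it assumes a \(3 \times 3\) Jordan block with \(\lambda = 2\) and uses numpy/scipy): compare the closed-form \(w_j(t)\) against \(e^{Bt}\mathbf{c}\).

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

lam, k = 2.0, 3
B = lam * np.eye(k) + np.diag(np.ones(k - 1), 1)   # 3x3 Jordan block with eigenvalue lam
c = np.array([1.0, -2.0, 0.5])                     # arbitrary constants; w(0) = c
t = 0.4

# closed form (0-indexed j): w_j(t) = e^{lam t} * sum_{m >= j} c_m t^(m-j) / (m-j)!
w_closed = np.array([
    np.exp(lam * t) * sum(c[m] * t**(m - j) / factorial(m - j) for m in range(j, k))
    for j in range(k)
])
print(np.allclose(w_closed, expm(B * t) @ c))      # True
```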

Source: 1_MT2175.pdf, p. 28

Question 10

What is a generalised eigenvector?

Answer

A non-zero vector \(\mathbf{v}\) is a **generalised eigenvector** of a matrix \(A\) corresponding to an eigenvalue \(\lambda\) if for some positive integer \(k\), it satisfies:

$$ (A - \lambda I)^k \mathbf{v} = \mathbf{0} \quad \text{but} \quad (A - \lambda I)^{k-1} \mathbf{v} \neq \mathbf{0} $$

An ordinary eigenvector is a special case where \(k=1\). The columns of the matrix \(P\) that transforms \(A\) into its Jordan normal form \(J = P^{-1}AP\) form a basis of generalised eigenvectors for \(A\).

Source: 1_MT2175.pdf, p. 25

Question 11

What is the relationship between the algebraic multiplicity and the geometric multiplicity of an eigenvalue, and why is it important for diagonalisation?

Answer

For any eigenvalue \(\lambda\) of a square matrix \(A\):

  • The **algebraic multiplicity** is the number of times \((x - \lambda)\) appears as a factor in the characteristic polynomial \(\text{det}(A - xI)\).
  • The **geometric multiplicity** is the dimension of the eigenspace corresponding to \(\lambda\), which is \(\text{dim}(\text{Nul}(A - \lambda I))\).

The relationship is that the geometric multiplicity is always less than or equal to the algebraic multiplicity:

$$ 1 \le \text{geometric multiplicity} \le \text{algebraic multiplicity} $$

This is crucial for diagonalisation because a matrix is diagonalisable if and only if, for every eigenvalue, the geometric multiplicity equals the algebraic multiplicity (and, for diagonalisation over the real numbers, all eigenvalues must be real).

Source: anthony.pdf, p. 269

Question 12

If a matrix \(A\) has \(n\) distinct eigenvalues, is it always diagonalisable? Why or why not?

Answer

Yes. If an \(n \times n\) matrix \(A\) has \(n\) distinct eigenvalues, it is always diagonalisable.

This is because eigenvectors corresponding to distinct eigenvalues are always linearly independent. Since there are \(n\) distinct eigenvalues, we can find \(n\) corresponding eigenvectors, which will form a set of \(n\) linearly independent vectors. An \(n \times n\) matrix is diagonalisable if and only if it has \(n\) linearly independent eigenvectors.

Source: anthony.pdf, p. 265

Question 13

Outline the full procedure for solving \(\mathbf{y}' = A\mathbf{y}\) with initial condition \(\mathbf{y}(0)\) when \(A\) is diagonalisable.

Answer

  1. Find the eigenvalues \(\lambda_i\) and corresponding eigenvectors \(\mathbf{v}_i\) of \(A\).
  2. Construct the matrices \(P = [\mathbf{v}_1, ..., \mathbf{v}_n]\) and \(D = \text{diag}(\lambda_1, ..., \lambda_n)\).
  3. Define the change of variable \(\mathbf{y} = P\mathbf{z}\). The system becomes \(\mathbf{z}' = D\mathbf{z}\).
  4. Solve the uncoupled system: \(z_i(t) = z_i(0)e^{\lambda_i t}\).
  5. Find the initial conditions for \(\mathbf{z}\) using \(\mathbf{z}(0) = P^{-1}\mathbf{y}(0)\).
  6. Write the solution for \(\mathbf{z}(t)\).
  7. Transform back to find the final solution: \(\mathbf{y}(t) = P\mathbf{z}(t)\).
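
A minimal numerical sketch of this procedure (my own example matrix and initial condition; numpy and scipy are assumed to be available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                  # diagonalisable: eigenvalues 3 and -1
y0 = np.array([1.0, 0.0])

lams, P = np.linalg.eig(A)                  # steps 1-2: eigenvalues and eigenvector matrix P
z0 = np.linalg.solve(P, y0)                 # step 5: z(0) = P^{-1} y(0)

def y(t):
    z = z0 * np.exp(lams * t)               # steps 4 and 6: z_i(t) = z_i(0) e^{lambda_i t}
    return P @ z                            # step 7: transform back, y = P z

print(np.allclose(y(0.8), expm(A * 0.8) @ y0))   # sanity check against e^{At} y(0): True
```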
Source: 1_MT2175.pdf, pp. 18-19

Question 14

If \(J\) is a Jordan matrix, what is the structure of the solution to \(\mathbf{z}' = J\mathbf{z}\)?

Answer

The system \(\mathbf{z}' = J\mathbf{z}\) decouples into smaller systems, one for each Jordan block on the diagonal of \(J\). If \(J = \text{diag}(B_1, B_2, ..., B_r)\), then the variables corresponding to each block \(B_i\) can be solved independently of the variables for other blocks \(B_j\) where \(j \neq i\).

Within each block, the equations are solved by back substitution, starting from the last variable in that block's subsystem.

Source: 1_MT2175.pdf, p. 27

Question 15

For a \(2 \times 2\) Jordan block \(B = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}\), what is the general solution to \(\mathbf{w}' = B\mathbf{w}\)?

Answer

The system of equations is:

$$ w'_1 = \lambda w_1 + w_2 $$ $$ w'_2 = \lambda w_2 $$

Solving the second equation first gives \(w_2(t) = c_2 e^{\lambda t}\).

Substituting this into the first equation gives \(w'_1 - \lambda w_1 = c_2 e^{\lambda t}\). This is a first-order linear ODE whose solution (using an integrating factor of \(e^{-\lambda t}\)) is:

$$ w_1(t) = c_1 e^{\lambda t} + c_2 t e^{\lambda t} $$

This matches the general formula from Theorem 2.2 for \(k=2\).

Source: 1_MT2175.pdf, p. 29

Question 16

What is the definition of a Hermitian matrix?

Answer

A square complex matrix \(A\) is **Hermitian** if it is equal to its conjugate transpose (also known as adjoint), denoted \(A^*\) or \(A^H\).

$$ A = A^* \quad \text{or} \quad A = (\overline{A})^T $$

This means that \(a_{ij} = \overline{a_{ji}}\) for all \(i, j\). If \(A\) is a real matrix, then a Hermitian matrix is simply a symmetric matrix (\(A = A^T\)).

Source: anthony.pdf, p. 403

Question 17

What is the definition of a Unitary matrix?

Answer

A square complex matrix \(U\) is **Unitary** if its conjugate transpose is also its inverse.

$$ U^*U = UU^* = I $$

This implies that \(U^* = U^{-1}\). If \(U\) is a real matrix, then a unitary matrix is an orthogonal matrix (\(U^T = U^{-1}\)). Unitary matrices preserve the inner product and thus the length of complex vectors.

Source: anthony.pdf, p. 404

Question 18

What is the definition of a Normal matrix?

Answer

A square complex matrix \(A\) is **Normal** if it commutes with its conjugate transpose.

$$ AA^* = A^*A $$

Hermitian matrices and Unitary matrices are special cases of normal matrices. Normal matrices are precisely those matrices that are diagonalisable by a unitary matrix (i.e., they have a complete set of orthonormal eigenvectors).

Source: anthony.pdf, p. 405

Question 19

State the Spectral Theorem for Hermitian matrices.

Answer

The Spectral Theorem for Hermitian matrices states that if \(A\) is a Hermitian matrix, then:

  1. All eigenvalues of \(A\) are real.
  2. Eigenvectors corresponding to distinct eigenvalues are orthogonal.
  3. \(A\) is unitarily diagonalisable, meaning there exists a unitary matrix \(U\) such that \(U^*AU = D\), where \(D\) is a diagonal matrix with the eigenvalues of \(A\) on its diagonal. The columns of \(U\) are an orthonormal basis of eigenvectors for \(A\).
Source: anthony.pdf, p. 406

Question 20

How can the solution to \(\mathbf{y}' = A\mathbf{y}\) be expressed using the matrix exponential \(e^{At}\)?

Answer

The general solution to the system of linear differential equations \(\mathbf{y}' = A\mathbf{y}\) can be expressed using the matrix exponential as:

$$ \mathbf{y}(t) = e^{At} \mathbf{y}(0) $$

where \(\mathbf{y}(0)\) is the initial condition vector. The matrix exponential \(e^{At}\) is defined by the power series:

$$ e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} $$
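
A brief sketch (my own example data; `scipy.linalg.expm` evaluates the matrix exponential numerically rather than by summing the series directly):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
y0 = np.array([1.0, 0.0])

def y(t):
    return expm(A * t) @ y0       # y(t) = e^{At} y(0)

print(np.allclose(y(0.0), y0))    # True, since e^{A*0} = I
print(y(1.5))                     # the solution at t = 1.5
```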
Source: 1_MT2175.pdf, p. 30

Question 21

If \(A\) is a diagonalisable matrix with \(D = P^{-1}AP\), how can \(e^{At}\) be computed?

Answer

If \(A\) is diagonalisable, then \(A = PDP^{-1}\). Using this, the matrix exponential can be computed as:

$$ e^{At} = P e^{Dt} P^{-1} $$

where \(e^{Dt}\) is easily computed if \(D = \text{diag}(\lambda_1, ..., \lambda_n)\):

$$ e^{Dt} = \text{diag}(e^{\lambda_1 t}, ..., e^{\lambda_n t}) $$

This simplifies the computation of \(e^{At}\) significantly.
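
A quick numerical check of this identity, using an assumed example matrix (not from the guide):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lams, P = np.linalg.eig(A)

t = 0.6
eDt = np.diag(np.exp(lams * t))   # e^{Dt} = diag(e^{lambda_i t})
print(np.allclose(P @ eDt @ np.linalg.inv(P), expm(A * t)))   # True
```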

Source: 1_MT2175.pdf, p. 31

Question 22

If \(A\) is a matrix in Jordan normal form \(J\), how can \(e^{Jt}\) be computed?

Answer

If \(J\) is a Jordan matrix, then \(e^{Jt}\) is a block diagonal matrix where each block corresponds to a Jordan block \(B_i\) of \(J\).

For a single \(k \times k\) Jordan block \(B = \lambda I + N\) (where \(N\) is the nilpotent part with 1s on the superdiagonal), the exponential is:

$$ e^{Bt} = e^{\lambda t} e^{Nt} = e^{\lambda t} (I + Nt + \frac{(Nt)^2}{2!} + \cdots + \frac{(Nt)^{k-1}}{(k-1)!}) $$

Since \(N^k = 0\), the series terminates. For example, for a \(3 \times 3\) Jordan block:

$$ e^{Bt} = e^{\lambda t} \begin{pmatrix} 1 & t & t^2/2! \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} $$
Source: 1_MT2175.pdf, p. 32

Question 23

What is the definition of an eigenvector and eigenvalue?

Answer

An **eigenvector** of a square matrix \(A\) is a non-zero vector \(\mathbf{v}\) such that when \(A\) multiplies \(\mathbf{v}\), the result is a scalar multiple of \(\mathbf{v}\).

The scalar \(\lambda\) is called the **eigenvalue** corresponding to \(\mathbf{v}\).

Mathematically, this is expressed as:

$$ A\mathbf{v} = \lambda\mathbf{v} $$
Source: anthony.pdf, p. 255

Question 24

How do you find the eigenvalues of a matrix \(A\)?

Answer

To find the eigenvalues \(\lambda\) of a matrix \(A\), you need to solve the characteristic equation:

$$ \text{det}(A - \lambda I) = 0 $$

where \(I\) is the identity matrix of the same dimension as \(A\). The roots of this polynomial equation are the eigenvalues.
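
A minimal sketch, assuming numpy and my own example matrix: the computed eigenvalues are exactly the roots of the characteristic equation.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams = np.linalg.eigvals(A)
print(np.sort(lams.real))         # [2. 5.]

# each eigenvalue is a root of det(A - lambda I) = 0
for lam in lams:
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))   # True
```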

Source: anthony.pdf, p. 257

Question 25

How do you find the eigenvectors corresponding to a given eigenvalue \(\lambda\)?

Answer

Once an eigenvalue \(\lambda\) is found, the corresponding eigenvectors \(\mathbf{v}\) are the non-zero solutions to the homogeneous system:

$$ (A - \lambda I)\mathbf{v} = \mathbf{0} $$

This involves finding the null space (or kernel) of the matrix \((A - \lambda I)\). The basis vectors for this null space are the eigenvectors for \(\lambda\).
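
A short sketch (my own example; `scipy.linalg.null_space` is assumed to be available): the eigenvectors for \(\lambda\) are the non-zero vectors of \(\text{Nul}(A - \lambda I)\).

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 5.0                                    # an eigenvalue of A
basis = null_space(A - lam * np.eye(2))      # columns form a basis of the eigenspace
v = basis[:, 0]
print(np.allclose(A @ v, lam * v))           # True: v is an eigenvector for lambda = 5
```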

Source: anthony.pdf, p. 258

Question 26

What is the definition of a diagonalisable matrix?

Answer

A square matrix \(A\) is **diagonalisable** if it is similar to a diagonal matrix. This means there exists an invertible matrix \(P\) and a diagonal matrix \(D\) such that:

$$ A = PDP^{-1} \quad \text{or equivalently} \quad D = P^{-1}AP $$

The columns of \(P\) are the linearly independent eigenvectors of \(A\), and the diagonal entries of \(D\) are the corresponding eigenvalues.

Source: anthony.pdf, p. 263

Question 27

What are the conditions for a matrix to be diagonalisable?

Answer

An \(n \times n\) matrix \(A\) is diagonalisable if and only if:

  1. The sum of the dimensions of its eigenspaces equals \(n\).
  2. It has \(n\) linearly independent eigenvectors.
  3. For each eigenvalue, its geometric multiplicity equals its algebraic multiplicity.

If \(A\) has \(n\) distinct eigenvalues, it is automatically diagonalisable.

Source: anthony.pdf, p. 269

Question 28

Explain the concept of a basis of eigenvectors.

Answer

A **basis of eigenvectors** for an \(n \times n\) matrix \(A\) is a set of \(n\) linearly independent eigenvectors of \(A\) that span the entire vector space \(\mathbb{R}^n\) (or \(\mathbb{C}^n\)).

If a matrix has such a basis, it means that any vector in the space can be written as a linear combination of these eigenvectors. This is precisely the condition for a matrix to be diagonalisable.

Source: anthony.pdf, p. 263

Question 29

What is the significance of the matrix \(P\) in the diagonalisation \(A = PDP^{-1}\)?

Answer

The matrix \(P\) is the **change-of-basis matrix** whose columns are the linearly independent eigenvectors of \(A\). It transforms coordinates from the eigenvector basis to the standard basis.

Its inverse, \(P^{-1}\), transforms coordinates from the standard basis to the eigenvector basis. When we write \(D = P^{-1}AP\), it means that \(A\) acts like the diagonal matrix \(D\) when viewed in the basis of its eigenvectors.

Source: anthony.pdf, p. 263

Question 30

What is the definition of a nilpotent matrix?

Answer

A square matrix \(N\) is **nilpotent** if some positive integer power of \(N\) is the zero matrix. That is, \(N^k = 0\) for some integer \(k \ge 1\).

For example, the matrix \(\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\) is nilpotent because its square is the zero matrix.

Source: 1_MT2175.pdf, p. 24

Question 31

How is a Jordan block related to a nilpotent matrix?

Answer

A Jordan block \(B\) with eigenvalue \(\lambda\) can be written as the sum of a scalar multiple of the identity matrix and a nilpotent matrix:

$$ B = \lambda I + N $$

where \(N\) is a nilpotent matrix with 1s on the superdiagonal and 0s elsewhere. For example, for a \(3 \times 3\) Jordan block:

$$ \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} = \lambda \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} $$

The matrix \(N\) here is nilpotent, as \(N^3 = 0\).

Source: 1_MT2175.pdf, p. 24

Question 32

What is the significance of the uniqueness of the Jordan normal form?

Answer

The Jordan normal form of a matrix is unique up to the ordering of the Jordan blocks. This uniqueness means that the JNF provides a canonical form for every square matrix under similarity.

It allows us to classify matrices and determine if two matrices are similar: two matrices are similar if and only if they have the same Jordan normal form (up to block ordering).

Source: 1_MT2175.pdf, p. 23

Question 33

How does the size of a Jordan block relate to the algebraic and geometric multiplicities of its eigenvalue?

Answer

For a given eigenvalue \(\lambda\):

  • The **algebraic multiplicity** of \(\lambda\) is the sum of the sizes of all Jordan blocks corresponding to \(\lambda\).
  • The **geometric multiplicity** of \(\lambda\) is the number of Jordan blocks corresponding to \(\lambda\).

If the geometric multiplicity is less than the algebraic multiplicity, it means there is at least one Jordan block of size greater than 1.

Source: 1_MT2175.pdf, p. 23

Question 34

What is the definition of a chain of generalised eigenvectors?

Answer

A **chain of generalised eigenvectors** of length \(k\) corresponding to an eigenvalue \(\lambda\) is a sequence of non-zero vectors \(\mathbf{v}_1, \mathbf{v}_2, ..., \mathbf{v}_k\) such that:

$$ (A - \lambda I)\mathbf{v}_1 = \mathbf{0} \quad (\text{so } \mathbf{v}_1 \text{ is an eigenvector}) $$ $$ (A - \lambda I)\mathbf{v}_2 = \mathbf{v}_1 $$ $$ \vdots $$ $$ (A - \lambda I)\mathbf{v}_k = \mathbf{v}_{k-1} $$

This can be written more compactly as \((A - \lambda I)^j \mathbf{v}_j = \mathbf{0}\) for \(j = 1, ..., k\), and \((A - \lambda I)^j \mathbf{v}_k = \mathbf{v}_{k-j}\) for \(j = 1, ..., k-1\).

Source: 1_MT2175.pdf, p. 25

Question 35

How are chains of generalised eigenvectors used to construct the matrix \(P\) for Jordan normal form?

Answer

The columns of the matrix \(P\) (such that \(J = P^{-1}AP\)) are formed by concatenating the chains of generalised eigenvectors. Each chain corresponds to a Jordan block.

For a Jordan block of size \(k\) corresponding to \(\lambda\), the columns of \(P\) would be \([\mathbf{v}_1, \mathbf{v}_2, ..., \mathbf{v}_k]\) where these vectors form a chain of generalised eigenvectors.

Source: 1_MT2175.pdf, p. 25

Question 36

What is the primary difference in solving \(\mathbf{y}' = A\mathbf{y}\) when \(A\) is diagonalisable versus when it is not?

Answer

When \(A\) is diagonalisable, the change of variables \(\mathbf{y} = P\mathbf{z}\) leads to a completely uncoupled system \(\mathbf{z}' = D\mathbf{z}\) where \(D\) is diagonal, and each \(z_i' = \lambda_i z_i\) can be solved independently.

When \(A\) is not diagonalisable, the change of variables leads to \(\mathbf{z}' = J\mathbf{z}\) where \(J\) is in Jordan normal form. This system is not completely uncoupled; the equations within each Jordan block are coupled (e.g., \(z_1' = \lambda z_1 + z_2\)), requiring sequential solving (back substitution).

Source: 1_MT2175.pdf, pp. 17, 25

Question 37

What is the definition of a real symmetric matrix?

Answer

A square real matrix \(A\) is **symmetric** if it is equal to its transpose, i.e., \(A = A^T\).

This means that \(a_{ij} = a_{ji}\) for all \(i, j\). Real symmetric matrices are a special case of Hermitian matrices, and they have many desirable properties, such as always being diagonalisable by an orthogonal matrix and having real eigenvalues.

Source: anthony.pdf, p. 403

Question 38

State the Spectral Theorem for Real Symmetric matrices.

Answer

The Spectral Theorem for Real Symmetric matrices states that if \(A\) is a real symmetric matrix, then:

  1. All eigenvalues of \(A\) are real.
  2. Eigenvectors corresponding to distinct eigenvalues are orthogonal.
  3. \(A\) is orthogonally diagonalisable, meaning there exists an orthogonal matrix \(P\) (i.e., \(P^T = P^{-1}\)) such that \(P^TAP = D\), where \(D\) is a diagonal matrix with the eigenvalues of \(A\) on its diagonal. The columns of \(P\) are an orthonormal basis of eigenvectors for \(A\).
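
A small numerical illustration (my own example matrix; numpy's `eigh` is assumed): for a real symmetric \(A\), the returned eigenvector matrix \(P\) is orthogonal and \(P^TAP = D\).

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])              # real symmetric
lams, P = np.linalg.eigh(A)                  # real eigenvalues and orthonormal eigenvectors

print(np.allclose(P.T @ P, np.eye(3)))           # True: P is orthogonal
print(np.allclose(P.T @ A @ P, np.diag(lams)))   # True: P^T A P = D
```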

Source: anthony.pdf, p. 406

Question 39

What is the definition of an orthogonal matrix?

Answer

A square real matrix \(P\) is **orthogonal** if its transpose is also its inverse.

$$ P^TP = PP^T = I $$

This implies that \(P^T = P^{-1}\). Orthogonal matrices preserve the dot product and thus the length and angle of real vectors. Their columns (and rows) form an orthonormal basis.

Source: anthony.pdf, p. 404

Question 40

How can the solution to a non-homogeneous system \(\mathbf{y}' = A\mathbf{y} + \mathbf{f}(t)\) be found?

Answer

The general solution to a non-homogeneous system \(\mathbf{y}' = A\mathbf{y} + \mathbf{f}(t)\) is the sum of the general solution to the homogeneous system (\(\mathbf{y}_h(t) = e^{At}\mathbf{c}\)) and a particular solution to the non-homogeneous system (\(\mathbf{y}_p(t)\)).

One method to find \(\mathbf{y}_p(t)\) is **variation of parameters**:

$$ \mathbf{y}_p(t) = e^{At} \int e^{-As} \mathbf{f}(s) ds $$

Alternatively, if \(A\) is diagonalisable, one can transform the system into the diagonal basis, solve the uncoupled non-homogeneous equations, and then transform back.

Source: 1_MT2175.pdf, p. 33

Question 41

What is the definition of a positive definite matrix?

Answer

A symmetric real matrix \(A\) is **positive definite** if for all non-zero vectors \(\mathbf{x} \in \mathbb{R}^n\), the quadratic form \(\mathbf{x}^TA\mathbf{x} > 0\).

Equivalently, all eigenvalues of a positive definite matrix are strictly positive. Positive definite matrices are important in optimization, stability analysis of differential equations, and defining inner products.

Source: anthony.pdf, p. 410

Question 42

What is the definition of a negative definite matrix?

Answer

A symmetric real matrix \(A\) is **negative definite** if for all non-zero vectors \(\mathbf{x} \in \mathbb{R}^n\), the quadratic form \(\mathbf{x}^TA\mathbf{x} < 0\).

Equivalently, all eigenvalues of a negative definite matrix are strictly negative.

Source: anthony.pdf, p. 410

Question 43

What is the definition of a positive semi-definite matrix?

Answer

A symmetric real matrix \(A\) is **positive semi-definite** if for all non-zero vectors \(\mathbf{x} \in \mathbb{R}^n\), the quadratic form \(\mathbf{x}^TA\mathbf{x} \ge 0\).

Equivalently, all eigenvalues of a positive semi-definite matrix are non-negative (greater than or equal to zero).

Source: anthony.pdf, p. 410

Question 44

What is the definition of a negative semi-definite matrix?

Answer

A symmetric real matrix \(A\) is **negative semi-definite** if for all non-zero vectors \(\mathbf{x} \in \mathbb{R}^n\), the quadratic form \(\mathbf{x}^TA\mathbf{x} \le 0\).

Equivalently, all eigenvalues of a negative semi-definite matrix are non-positive (less than or equal to zero).

Source: anthony.pdf, p. 410

Question 45

What is the definition of an indefinite matrix?

Answer

A symmetric real matrix \(A\) is **indefinite** if it is neither positive semi-definite nor negative semi-definite. This means there exist vectors \(\mathbf{x}\) and \(\mathbf{y}\) such that \(\mathbf{x}^TA\mathbf{x} > 0\) and \(\mathbf{y}^TA\mathbf{y} < 0\).

Equivalently, an indefinite matrix has both positive and negative eigenvalues.

Source: anthony.pdf, p. 410

Question 46

How can the stability of the equilibrium point \(\mathbf{y} = \mathbf{0}\) for the system \(\mathbf{y}' = A\mathbf{y}\) be determined from the eigenvalues of \(A\)?

Answer

The stability of the equilibrium point \(\mathbf{y} = \mathbf{0}\) is determined by the real parts of the eigenvalues of \(A\):

  • If all eigenvalues have **negative real parts**, then \(\mathbf{0}\) is an asymptotically stable equilibrium (solutions approach \(\mathbf{0}\) as \(t \to \infty\)).
  • If at least one eigenvalue has a **positive real part**, then \(\mathbf{0}\) is an unstable equilibrium (solutions move away from \(\mathbf{0}\)).
  • If all eigenvalues have **non-positive real parts**, and those with zero real parts have geometric multiplicity equal to their algebraic multiplicity (i.e., corresponding Jordan blocks are \(1 \times 1\)), then \(\mathbf{0}\) is a stable equilibrium (solutions remain bounded).
  • If there is an eigenvalue with a **zero real part** whose geometric multiplicity is less than its algebraic multiplicity (i.e., a Jordan block of size > 1 for that eigenvalue), then \(\mathbf{0}\) is unstable.
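
A hedged helper sketch (the function name and examples are mine, not from the guide) applying the first two bullet points; the borderline cases still require the Jordan-block check described above.

```python
import numpy as np

def classify_origin(A):
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < 0):
        return "asymptotically stable"
    if np.any(re > 0):
        return "unstable"
    return "borderline: check the Jordan blocks of eigenvalues with zero real part"

print(classify_origin(np.array([[-1.0, 0.0], [0.0, -2.0]])))   # asymptotically stable
print(classify_origin(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # borderline (a centre)
```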
Source: 1_MT2175.pdf, p. 34

Question 47

What is the definition of a stable equilibrium point?

Answer

An equilibrium point is **stable** if solutions that start sufficiently close to the equilibrium point remain close to it for all future time. This means that for any \(\epsilon > 0\), there exists a \(\delta > 0\) such that if \(\Vert \mathbf{y}(0) - \mathbf{y}_{eq} \Vert < \delta\), then \(\Vert \mathbf{y}(t) - \mathbf{y}_{eq} \Vert < \epsilon\) for all \(t \ge 0\).

Source: 1_MT2175.pdf, p. 34

Question 48

What is the definition of an asymptotically stable equilibrium point?

Answer

An equilibrium point is **asymptotically stable** if it is stable, and additionally, solutions that start sufficiently close to the equilibrium point not only remain close but also approach the equilibrium point as time goes to infinity. That is, \(\lim_{t \to \infty} \mathbf{y}(t) = \mathbf{y}_{eq}\).

Source: 1_MT2175.pdf, p. 34

Question 49

What is the definition of an unstable equilibrium point?

Answer

An equilibrium point is **unstable** if it is not stable. This means that there exist solutions starting arbitrarily close to the equilibrium point that eventually move away from it.

Source: 1_MT2175.pdf, p. 34

Question 50

What is the relationship between the eigenvalues of \(A\) and the eigenvalues of \(A^k\)?

Answer

If \(\lambda\) is an eigenvalue of \(A\) with corresponding eigenvector \(\mathbf{v}\), then \(\lambda^k\) is an eigenvalue of \(A^k\) with the same eigenvector \(\mathbf{v}\).

This can be shown by applying \(A\) repeatedly:

$$ A^2\mathbf{v} = A(A\mathbf{v}) = A(\lambda\mathbf{v}) = \lambda(A\mathbf{v}) = \lambda(\lambda\mathbf{v}) = \lambda^2\mathbf{v} $$

And by induction, \(A^k\mathbf{v} = \lambda^k\mathbf{v}\).

Source: anthony.pdf, p. 260

Question 51

What is the relationship between the eigenvalues of \(A\) and the eigenvalues of \(A^{-1}\) (if \(A\) is invertible)?

Answer

If \(A\) is an invertible matrix and \(\lambda\) is an eigenvalue of \(A\) with corresponding eigenvector \(\mathbf{v}\), then \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{-1}\) with the same eigenvector \(\mathbf{v}\).

This can be shown by multiplying \(A\mathbf{v} = \lambda\mathbf{v}\) by \(A^{-1}\):

$$ A^{-1}(A\mathbf{v}) = A^{-1}(\lambda\mathbf{v}) $$ $$ I\mathbf{v} = \lambda A^{-1}\mathbf{v} $$ $$ \mathbf{v} = \lambda A^{-1}\mathbf{v} $$ $$ \frac{1}{\lambda}\mathbf{v} = A^{-1}\mathbf{v} $$

Note that \(\lambda \neq 0\) for \(A\) to be invertible.

Source: anthony.pdf, p. 260

Question 52

What is the relationship between the eigenvalues of \(A\) and the eigenvalues of \(A^T\)?

Answer

A matrix \(A\) and its transpose \(A^T\) have the same eigenvalues. This is because they have the same characteristic polynomial:

$$ \text{det}(A - \lambda I) = \text{det}((A - \lambda I)^T) = \text{det}(A^T - \lambda I^T) = \text{det}(A^T - \lambda I) $$

However, their eigenvectors are generally different.

Source: anthony.pdf, p. 260

Question 53

What is the definition of a defective matrix?

Answer

A square matrix is **defective** if it does not have a complete set of linearly independent eigenvectors. This occurs when, for at least one eigenvalue, its geometric multiplicity is strictly less than its algebraic multiplicity.

Defective matrices are not diagonalisable, but they do have a Jordan normal form.

Source: anthony.pdf, p. 269

Question 54

What is the relationship between the trace of a matrix and its eigenvalues?

Answer

The **trace** of a square matrix (the sum of its diagonal entries) is equal to the sum of its eigenvalues (counted with algebraic multiplicity).

$$ \text{tr}(A) = \sum_{i=1}^n a_{ii} = \sum_{i=1}^n \lambda_i $$
Source: anthony.pdf, p. 261

Question 55

What is the relationship between the determinant of a matrix and its eigenvalues?

Answer

The **determinant** of a square matrix is equal to the product of its eigenvalues (counted with algebraic multiplicity).

$$ \text{det}(A) = \prod_{i=1}^n \lambda_i $$
Source: anthony.pdf, p. 261

Question 56

Explain how complex eigenvalues arise in real matrices and their implications for eigenvectors.

Answer

For a real matrix, if \(\lambda = a + bi\) is a complex eigenvalue (where \(b \neq 0\)), then its complex conjugate \(\overline{\lambda} = a - bi\) must also be an eigenvalue. The corresponding eigenvectors will also be complex conjugates of each other.

This means that complex eigenvalues always appear in conjugate pairs for real matrices. While the matrix cannot be diagonalised over the real numbers, it can be diagonalised over the complex numbers.

Source: anthony.pdf, p. 401

Question 57

How can a real matrix with complex conjugate eigenvalues be transformed into a block diagonal form?

Answer

If a real matrix \(A\) has a complex eigenvalue \(\lambda = a - bi\) with eigenvector \(\mathbf{v}\), then \(\overline{\lambda} = a + bi\) is also an eigenvalue with eigenvector \(\overline{\mathbf{v}}\). We can construct a real invertible matrix \(P\) such that \(P^{-1}AP = C\), where \(C\) is a block diagonal matrix with \(2 \times 2\) blocks of the form:

$$ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} $$

This transformation allows us to analyze the system in real terms without explicitly using complex numbers in the final solution.

Source: anthony.pdf, p. 402

Question 58

What is the general solution for a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\) where \(A\) has complex conjugate eigenvalues \(a \pm bi\)?

Answer

If \(A\) has complex conjugate eigenvalues \(a \pm bi\) with corresponding complex eigenvectors \(\mathbf{v} = \mathbf{v}_R + i\mathbf{v}_I\) and \(\overline{\mathbf{v}} = \mathbf{v}_R - i\mathbf{v}_I\), then two linearly independent real solutions are:

$$ \mathbf{y}_1(t) = e^{at} (\mathbf{v}_R \cos(bt) - \mathbf{v}_I \sin(bt)) $$ $$ \mathbf{y}_2(t) = e^{at} (\mathbf{v}_R \sin(bt) + \mathbf{v}_I \cos(bt)) $$

The general real solution is \(\mathbf{y}(t) = c_1 \mathbf{y}_1(t) + c_2 \mathbf{y}_2(t)\).
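
A numerical check of these formulas (my own example matrix; numpy/scipy assumed): build \(\mathbf{y}_1, \mathbf{y}_2\) from a computed eigenpair, match the initial condition, and compare with \(e^{At}\mathbf{y}(0)\).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, -5.0],
              [2.0, -1.0]])           # eigenvalues form a complex conjugate pair
lams, V = np.linalg.eig(A)
i = np.argmax(lams.imag)              # pick lambda = a + bi with b > 0
a, b = lams[i].real, lams[i].imag
vR, vI = V[:, i].real, V[:, i].imag

def y1(t):                            # e^{at}(vR cos(bt) - vI sin(bt))
    return np.exp(a * t) * (vR * np.cos(b * t) - vI * np.sin(b * t))

def y2(t):                            # e^{at}(vR sin(bt) + vI cos(bt))
    return np.exp(a * t) * (vR * np.sin(b * t) + vI * np.cos(b * t))

y0 = np.array([1.0, 2.0])
c = np.linalg.solve(np.column_stack([y1(0.0), y2(0.0)]), y0)   # c1, c2 from y(0)

t = 0.7
print(np.allclose(c[0] * y1(t) + c[1] * y2(t), expm(A * t) @ y0))   # True
```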

Source: anthony.pdf, p. 402

Question 59

What is the definition of a quadratic form?

Answer

A **quadratic form** is a function \(Q: \mathbb{R}^n \to \mathbb{R}\) defined by \(Q(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}\), where \(A\) is a symmetric \(n \times n\) real matrix and \(\mathbf{x}\) is a vector in \(\mathbb{R}^n\).

Quadratic forms are used in various areas, including optimization, geometry (describing conic sections and quadric surfaces), and physics.

Source: anthony.pdf, p. 409

Question 60

How can the nature of a quadratic form (positive definite, etc.) be determined from the eigenvalues of its associated symmetric matrix?

Answer

For a symmetric matrix \(A\) associated with a quadratic form \(Q(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}\):

  • \(Q\) is positive definite if and only if all eigenvalues of \(A\) are \(> 0\).
  • \(Q\) is negative definite if and only if all eigenvalues of \(A\) are \(< 0\).
  • \(Q\) is positive semi-definite if and only if all eigenvalues of \(A\) are \(\ge 0\).
  • \(Q\) is negative semi-definite if and only if all eigenvalues of \(A\) are \(\le 0\).
  • \(Q\) is indefinite if and only if \(A\) has both positive and negative eigenvalues.
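
A compact sketch (the function name and tolerance are my own choices) classifying a quadratic form from the eigenvalues of its symmetric matrix, per the bullets above.

```python
import numpy as np

def definiteness(A, tol=1e-12):
    lams = np.linalg.eigvalsh(A)          # eigenvalues of the symmetric matrix A
    if np.all(lams > tol):
        return "positive definite"
    if np.all(lams < -tol):
        return "negative definite"
    if np.all(lams >= -tol):
        return "positive semi-definite"
    if np.all(lams <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[2.0, 0.0], [0.0, 3.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -4.0]])))   # indefinite
```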
Source: anthony.pdf, p. 410

Question 61

What is the definition of a singular matrix in terms of eigenvalues?

Answer

A square matrix \(A\) is **singular** (non-invertible) if and only if \(0\) is an eigenvalue of \(A\).

This is because \(\text{det}(A - 0I) = \text{det}(A)\). If \(0\) is an eigenvalue, then \(\text{det}(A) = 0\), which means the matrix is singular.

Source: anthony.pdf, p. 261

Question 62

What is the relationship between the rank of a matrix and its eigenvalues?

Answer

If \(A\) is diagonalisable, its **rank** equals the number of non-zero eigenvalues, counted with algebraic multiplicity.

In general, the nullity (the dimension of the null space) equals the geometric multiplicity of the eigenvalue \(0\) (the number of Jordan blocks for \(0\)), so \(\text{rank}(A) = n - \text{dim}(\text{Nul}(A))\) by the Rank-Nullity Theorem.

Source: anthony.pdf, p. 261

Question 63

How can the Cayley-Hamilton Theorem be used in the context of matrix exponentials?

Answer

The Cayley-Hamilton Theorem states that every square matrix satisfies its own characteristic equation. This means if \(p(\lambda) = \text{det}(A - \lambda I) = c_n\lambda^n + \cdots + c_1\lambda + c_0\), then \(p(A) = c_nA^n + \cdots + c_1A + c_0I = 0\).

This theorem allows us to express higher powers of \(A\) as linear combinations of lower powers of \(A\) (up to \(A^{n-1}\)). This can simplify the computation of \(e^{At}\) by reducing the infinite series to a finite sum involving powers of \(A\) up to \(A^{n-1}\).
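
A small numerical verification (my own example; numpy assumed) that \(p(A) = 0\), together with the resulting reduction of \(A^2\) for a \(2 \times 2\) matrix.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs = np.poly(A)            # monic characteristic polynomial coefficients, highest power first

# evaluate p(A) with Horner's rule, using matrix products
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(2)
print(np.allclose(P, 0))       # True: A satisfies its own characteristic equation

# for a 2x2 matrix, p(x) = x^2 - tr(A) x + det(A), so A^2 = tr(A) A - det(A) I
print(np.allclose(A @ A, np.trace(A) * A - np.linalg.det(A) * np.eye(2)))   # True
```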

Source: anthony.pdf, p. 275

Question 64

What is the definition of a generalised eigenspace?

Answer

The **generalised eigenspace** \(K_{\lambda}\) corresponding to an eigenvalue \(\lambda\) of a matrix \(A\) is the set of all vectors \(\mathbf{v}\) such that \((A - \lambda I)^k \mathbf{v} = \mathbf{0}\) for some positive integer \(k\).

The dimension of the generalised eigenspace for \(\lambda\) is equal to the algebraic multiplicity of \(\lambda\).

Source: 1_MT2175.pdf, p. 25

Question 65

How do generalised eigenspaces relate to the Jordan normal form?

Answer

The entire vector space \(\mathbb{C}^n\) can be decomposed into a direct sum of the generalised eigenspaces of \(A\).

Each Jordan block corresponds to a subspace of a generalised eigenspace. The basis for the Jordan normal form consists of vectors from these generalised eigenspaces, specifically the chains of generalised eigenvectors.

Source: 1_MT2175.pdf, p. 25

Question 66

What is the definition of a fundamental matrix for \(\mathbf{y}' = A\mathbf{y}\)?

Answer

A **fundamental matrix** \(\Phi(t)\) for the system \(\mathbf{y}' = A\mathbf{y}\) is any matrix whose columns are \(n\) linearly independent solutions to the system.

If \(\mathbf{y}_1(t), ..., \mathbf{y}_n(t)\) are linearly independent solutions, then \(\Phi(t) = [\mathbf{y}_1(t) \cdots \mathbf{y}_n(t)]\).

The general solution can then be written as \(\mathbf{y}(t) = \Phi(t)\mathbf{c}\), where \(\mathbf{c}\) is a constant vector.

Source: 1_MT2175.pdf, p. 30

Question 67

How is the matrix exponential \(e^{At}\) related to a fundamental matrix?

Answer

The matrix exponential \(e^{At}\) is a special fundamental matrix. Specifically, it is the unique fundamental matrix \(\Phi(t)\) that satisfies the initial condition \(\Phi(0) = I\) (the identity matrix).

Any fundamental matrix \(\Psi(t)\) can be related to \(e^{At}\) by \(\Psi(t) = e^{At}C\) for some constant invertible matrix \(C\).

Source: 1_MT2175.pdf, p. 30

Question 68

What is the Wronskian of a set of solutions to a system of differential equations, and what is its significance?

Answer

For a set of \(n\) solutions \(\mathbf{y}_1(t), ..., \mathbf{y}_n(t)\) to \(\mathbf{y}' = A\mathbf{y}\), the **Wronskian** \(W(t)\) is the determinant of the matrix whose columns are these solutions:

$$ W(t) = \text{det}([\mathbf{y}_1(t) \cdots \mathbf{y}_n(t)]) $$

Its significance is that the solutions are linearly independent if and only if the Wronskian is non-zero for at least one point \(t\) (and thus for all \(t\)). If \(W(t) = 0\) for some \(t\), then the solutions are linearly dependent.

Source: 1_MT2175.pdf, p. 30

Question 69

State Abel's Theorem for the Wronskian of solutions to \(\mathbf{y}' = A\mathbf{y}\).

Answer

Abel's Theorem states that for a system \(\mathbf{y}' = A\mathbf{y}\), the Wronskian \(W(t)\) of any set of \(n\) solutions satisfies:

$$ W(t) = W(t_0) e^{\int_{t_0}^t \text{tr}(A(s)) ds} $$

If \(A\) is a constant matrix, this simplifies to \(W(t) = W(t_0) e^{\text{tr}(A)(t - t_0)}\).

This theorem implies that if the Wronskian is non-zero at one point, it is non-zero everywhere, confirming the linear independence criterion.
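
A small numerical illustration for constant \(A\) (my own example): the columns of \(e^{At}\) form a fundamental set, and their Wronskian satisfies \(W(t) = W(0)e^{\text{tr}(A)t}\).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.5, -3.0]])

def W(t):
    # the columns of e^{At} are n linearly independent solutions,
    # so their determinant is a Wronskian
    return np.linalg.det(expm(A * t))

t = 1.3
print(np.isclose(W(t), W(0.0) * np.exp(np.trace(A) * t)))   # True
```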

Source: 1_MT2175.pdf, p. 30

Question 70

What is the definition of a stable node in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), a **stable node** occurs when both eigenvalues are real, distinct, and negative (\(\lambda_1 < \lambda_2 < 0\)).

In the phase portrait, all trajectories approach the origin as \(t \to \infty\). Trajectories are tangent to the eigenvector corresponding to the eigenvalue closer to zero, except for those along the other eigenvector.

Source: 1_MT2175.pdf, p. 35

Question 71

What is the definition of an unstable node in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), an **unstable node** occurs when both eigenvalues are real, distinct, and positive (\(0 < \lambda_2 < \lambda_1\)).

In the phase portrait, all trajectories move away from the origin as \(t \to \infty\). Near the origin, trajectories are tangent to the eigenvector corresponding to the eigenvalue closer to zero, except for those along the other eigenvector.

Source: 1_MT2175.pdf, p. 35

Question 72

What is the definition of a saddle point in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), a **saddle point** occurs when the eigenvalues are real and have opposite signs (\(\lambda_1 < 0 < \lambda_2\)).

In the phase portrait, trajectories approach the origin along the eigenvector corresponding to the negative eigenvalue and move away from the origin along the eigenvector corresponding to the positive eigenvalue. The origin is an unstable equilibrium.

Source: 1_MT2175.pdf, p. 35

Question 73

What is the definition of a stable spiral in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), a **stable spiral** occurs when the eigenvalues are complex conjugates with a negative real part (\(\lambda = a \pm bi\) with \(a < 0\)).

In the phase portrait, trajectories spiral inwards towards the origin as \(t \to \infty\). The origin is an asymptotically stable equilibrium.

Source: 1_MT2175.pdf, p. 35

Question 74

What is the definition of an unstable spiral in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), an **unstable spiral** occurs when the eigenvalues are complex conjugates with a positive real part (\(\lambda = a \pm bi\) with \(a > 0\)).

In the phase portrait, trajectories spiral outwards away from the origin as \(t \to \infty\). The origin is an unstable equilibrium.

Source: 1_MT2175.pdf, p. 35

Question 75

What is the definition of a center in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), a **center** occurs when the eigenvalues are purely imaginary (\(\lambda = \pm bi\) with \(b \neq 0\)).

In the phase portrait, trajectories are closed ellipses around the origin. The origin is a stable (but not asymptotically stable) equilibrium.

Source: 1_MT2175.pdf, p. 35

Question 76

What is the definition of a degenerate node in phase portraits of \(2 \times 2\) systems?

Answer

For a \(2 \times 2\) system \(\mathbf{y}' = A\mathbf{y}\), a **degenerate node** occurs when there is a repeated real eigenvalue (\(\lambda_1 = \lambda_2 = \lambda\)) and the matrix is not diagonalisable (i.e., only one linearly independent eigenvector).

If \(\lambda < 0\), it is a stable degenerate node; if \(\lambda > 0\), it is an unstable degenerate node. Trajectories approach (if \(\lambda < 0\)) or recede from (if \(\lambda > 0\)) the origin, becoming tangent to the single eigenvector direction near the origin.

Source: 1_MT2175.pdf, p. 35

Question 77

How does the phase portrait change if the repeated eigenvalue in a degenerate node case has two linearly independent eigenvectors?

Answer

If a repeated real eigenvalue has two linearly independent eigenvectors, the matrix is diagonalisable. In this case, the phase portrait is a **star node** (or proper node).

All trajectories move directly towards (if \(\lambda < 0\)) or away from (if \(\lambda > 0\)) the origin along straight lines, without any curvature or preferred direction.
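
A rough classifier sketch covering the cases in Questions 70-77 (my own code and examples; it assumes a real \(2 \times 2\) matrix with non-zero eigenvalues):

```python
import numpy as np

def portrait(A):
    """Classify the origin for y' = Ay, A a real 2x2 matrix with non-zero eigenvalues."""
    lams = np.linalg.eigvals(A)
    if np.iscomplexobj(lams) and not np.allclose(lams.imag, 0):
        a = lams[0].real
        if np.isclose(a, 0):
            return "centre"
        return "stable spiral" if a < 0 else "unstable spiral"
    l1, l2 = np.sort(lams.real)
    if l1 < 0 < l2:
        return "saddle point"
    if np.isclose(l1, l2):
        # repeated eigenvalue: diagonalisable (star node) only if A is already lambda*I
        kind = "star node" if np.allclose(A, l1 * np.eye(2)) else "degenerate node"
        return ("stable " if l1 < 0 else "unstable ") + kind
    return "stable node" if l2 < 0 else "unstable node"

print(portrait(np.array([[0.0, 1.0], [-2.0, -3.0]])))   # stable node
print(portrait(np.array([[0.0, -1.0], [1.0, 0.0]])))    # centre
```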

Source: 1_MT2175.pdf, p. 35

Question 78

What is the definition of a linear transformation?

Answer

A transformation (or function) \(T: V \to W\) between two vector spaces \(V\) and \(W\) is a **linear transformation** if it satisfies two properties for all vectors \(\mathbf{u}, \mathbf{v}\) in \(V\) and all scalars \(c\):

  1. Additivity: \(T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})\)
  2. Homogeneity of degree 1: \(T(c\mathbf{u}) = cT(\mathbf{u})\)

These two properties can be combined into one: \(T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v})\).

Source: anthony.pdf, p. 101

Question 79

How are matrices related to linear transformations?

Answer

Every linear transformation \(T: \mathbb{R}^n \to \mathbb{R}^m\) can be represented by an \(m \times n\) matrix \(A\) such that \(T(\mathbf{x}) = A\mathbf{x}\) for all \(\mathbf{x} \in \mathbb{R}^n\).

Conversely, every \(m \times n\) matrix defines a linear transformation from \(\mathbb{R}^n\) to \(\mathbb{R}^m\). This establishes a fundamental connection between linear algebra and matrix theory.

Source: anthony.pdf, p. 108

Question 80

What is the kernel (or null space) of a linear transformation?

Answer

The **kernel** (or **null space**) of a linear transformation \(T: V \to W\) is the set of all vectors \(\mathbf{v}\) in \(V\) that are mapped to the zero vector in \(W\).

$$ \text{Ker}(T) = \{\mathbf{v} \in V \mid T(\mathbf{v}) = \mathbf{0}\} $$

For a matrix transformation \(T(\mathbf{x}) = A\mathbf{x}\), the kernel is the null space of the matrix \(A\), i.e., the set of all solutions to \(A\mathbf{x} = \mathbf{0}\).

Source: anthony.pdf, p. 198

Question 81

What is the image (or range) of a linear transformation?

Answer

The **image** (or **range**) of a linear transformation \(T: V \to W\) is the set of all vectors in \(W\) that are the image of at least one vector in \(V\).

$$ \text{Im}(T) = \{T(\mathbf{v}) \mid \mathbf{v} \in V\} $$

For a matrix transformation \(T(\mathbf{x}) = A\mathbf{x}\), the image is the column space of the matrix \(A\), i.e., the span of the columns of \(A\).

Source: anthony.pdf, p. 200

Question 82

State the Rank-Nullity Theorem.

Answer

The **Rank-Nullity Theorem** states that for a linear transformation \(T: V \to W\) (or an \(m \times n\) matrix \(A\)), the dimension of the domain \(V\) is equal to the sum of the dimension of the kernel (nullity) and the dimension of the image (rank).

$$ \text{dim}(V) = \text{dim}(\text{Ker}(T)) + \text{dim}(\text{Im}(T)) $$

For an \(m \times n\) matrix \(A\), this is \(n = \text{nullity}(A) + \text{rank}(A)\).

Source: anthony.pdf, p. 203

Question 83

What is the definition of a basis for a vector space?

Answer

A **basis** for a vector space \(V\) is a set of vectors \(\{\mathbf{v}_1, ..., \mathbf{v}_k\}\) in \(V\) that satisfies two conditions:

  1. The set is linearly independent.
  2. The set spans \(V\) (i.e., every vector in \(V\) can be written as a linear combination of the vectors in the set).

The number of vectors in a basis is unique for a given vector space and is called the dimension of the vector space.

Source: anthony.pdf, p. 175

Question 84

What is the definition of linear independence?

Answer

A set of vectors \(\{\mathbf{v}_1, ..., \mathbf{v}_k\}\) is **linearly independent** if the only solution to the vector equation:

$$ c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0} $$

is the trivial solution \(c_1 = c_2 = \cdots = c_k = 0\).

If there is a non-trivial solution, the vectors are linearly dependent.

Source: anthony.pdf, p. 165

Question 85

What is the definition of a spanning set for a vector space?

Answer

A set of vectors \(\{\mathbf{v}_1, ..., \mathbf{v}_k\}\) is a **spanning set** for a vector space \(V\) if every vector in \(V\) can be expressed as a linear combination of the vectors in the set.

The set of all linear combinations of \(\{\mathbf{v}_1, ..., \mathbf{v}_k\}\) is called the span of these vectors, denoted \(\text{span}\{\mathbf{v}_1, ..., \mathbf{v}_k\}\).

Source: anthony.pdf, p. 170

Question 86

What is the definition of a subspace?

Answer

A **subspace** of a vector space \(V\) is a subset \(H\) of \(V\) that itself is a vector space under the same addition and scalar multiplication operations defined on \(V\).

To verify if a subset \(H\) is a subspace, one must check three conditions:

  1. The zero vector of \(V\) is in \(H\).
  2. \(H\) is closed under vector addition (if \(\mathbf{u}, \mathbf{v} \in H\), then \(\mathbf{u} + \mathbf{v} \in H\)).
  3. \(H\) is closed under scalar multiplication (if \(\mathbf{u} \in H\) and \(c\) is a scalar, then \(c\mathbf{u} \in H\)).
Source: anthony.pdf, p. 150

Question 87

What is the relationship between the column space of a matrix and its image as a linear transformation?

Answer

The **column space** of a matrix \(A\), denoted \(\text{Col}(A)\), is the set of all linear combinations of the columns of \(A\).

This is precisely the **image** (or range) of the linear transformation \(T(\mathbf{x}) = A\mathbf{x}\). So, \(\text{Col}(A) = \text{Im}(T)\).

Source: anthony.pdf, p. 200

Question 88

What is the relationship between the null space of a matrix and the kernel of its associated linear transformation?

Answer

The **null space** of a matrix \(A\), denoted \(\text{Nul}(A)\), is the set of all solutions to the homogeneous equation \(A\mathbf{x} = \mathbf{0}\).

This is precisely the **kernel** of the linear transformation \(T(\mathbf{x}) = A\mathbf{x}\). So, \(\text{Nul}(A) = \text{Ker}(T)\).

Source: anthony.pdf, p. 198

Question 89

What is the definition of an inner product space?

Answer

An **inner product space** is a vector space \(V\) equipped with an inner product, which is a function that takes two vectors \(\mathbf{u}, \mathbf{v} \in V\) and returns a scalar, denoted \(\langle \mathbf{u}, \mathbf{v} \rangle\), satisfying the following axioms for all \(\mathbf{u}, \mathbf{v}, \mathbf{w} \in V\) and scalar \(c\):

  1. Symmetry (or conjugate symmetry for complex spaces): \(\langle \mathbf{u}, \mathbf{v} \rangle = \overline{\langle \mathbf{v}, \mathbf{u} \rangle}\)
  2. Linearity in the first argument: \(\langle c\mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = c\langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle\)
  3. Positive-definiteness: \(\langle \mathbf{u}, \mathbf{u} \rangle \ge 0\), and \(\langle \mathbf{u}, \mathbf{u} \rangle = 0\) if and only if \(\mathbf{u} = \mathbf{0}\).

The standard dot product in \(\mathbb{R}^n\) is a common example of an inner product.

Source: anthony.pdf, p. 335

Question 90

What is the definition of an orthonormal basis?

Answer

An **orthonormal basis** for an inner product space is a basis \(\{\mathbf{u}_1, ..., \mathbf{u}_k\}\) such that all vectors in the basis are orthogonal to each other and each vector has a norm (length) of 1.

Mathematically, this means \(\langle \mathbf{u}_i, \mathbf{u}_j \rangle = 0\) for \(i \neq j\) and \(\langle \mathbf{u}_i, \mathbf{u}_i \rangle = 1\) for all \(i\).

Source: anthony.pdf, p. 345

Question 91

State the Gram-Schmidt process.

Answer

The **Gram-Schmidt process** is an algorithm for orthogonalizing a set of vectors in an inner product space. Given a basis \(\{\mathbf{x}_1, ..., \mathbf{x}_k\}\) for a subspace \(W\), it constructs an orthogonal basis \(\{\mathbf{v}_1, ..., \mathbf{v}_k\}\) for \(W\) as follows:

  1. \(\mathbf{v}_1 = \mathbf{x}_1\)
  2. \(\mathbf{v}_2 = \mathbf{x}_2 - \frac{\langle \mathbf{x}_2, \mathbf{v}_1 \rangle}{\langle \mathbf{v}_1, \mathbf{v}_1 \rangle}\mathbf{v}_1\)
  3. \(\mathbf{v}_3 = \mathbf{x}_3 - \frac{\langle \mathbf{x}_3, \mathbf{v}_1 \rangle}{\langle \mathbf{v}_1, \mathbf{v}_1 \rangle}\mathbf{v}_1 - \frac{\langle \mathbf{x}_3, \mathbf{v}_2 \rangle}{\langle \mathbf{v}_2, \mathbf{v}_2 \rangle}\mathbf{v}_2\)
  4. Continue this process until \(\mathbf{v}_k\) is found.

To get an orthonormal basis, each \(\mathbf{v}_i\) is then normalized by dividing by its norm: \(\mathbf{u}_i = \frac{\mathbf{v}_i}{\Vert \mathbf{v}_i \Vert}\).
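
A direct implementation sketch of the process above (my own code, using the standard dot product as the inner product and normalising as it goes):

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X: a basis. Returns columns forming an orthonormal basis of Col(X)."""
    U = []
    for x in X.T:
        v = x.astype(float).copy()
        for u in U:
            v -= np.dot(x, u) * u          # subtract the projection onto each earlier direction
        U.append(v / np.linalg.norm(v))    # normalise
    return np.column_stack(U)

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: the columns are orthonormal
```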

Source: anthony.pdf, p. 348

Question 92

What is the QR factorization of a matrix?

Answer

The **QR factorization** of an \(m \times n\) matrix \(A\) with linearly independent columns is a decomposition \(A = QR\), where:

  • \(Q\) is an \(m \times n\) matrix with orthonormal columns.
  • \(R\) is an \(n \times n\) upper triangular matrix with positive diagonal entries.

The QR factorization can be obtained using the Gram-Schmidt process on the columns of \(A\). It is useful for solving least-squares problems, eigenvalue computations, and numerical stability.

Source: anthony.pdf, p. 355

Question 93

What is the definition of a least-squares solution to \(A\mathbf{x} = \mathbf{b}\)?

Answer

A **least-squares solution** to a system \(A\mathbf{x} = \mathbf{b}\) (which may be inconsistent) is a vector \(\hat{\mathbf{x}}\) that minimizes the distance \(\Vert \mathbf{b} - A\mathbf{x} \Vert\).

In other words, it finds the \(\mathbf{x}\) that makes \(A\mathbf{x}\) as close as possible to \(\mathbf{b}\). The least-squares solutions are the solutions to the normal equations:

$$ A^TA\mathbf{x} = A^T\mathbf{b} $$
Source: anthony.pdf, p. 365

Question 94

How can the least-squares solution be found using QR factorization?

Answer

If \(A = QR\) is the QR factorization of \(A\), then the normal equations \(A^TA\mathbf{x} = A^T\mathbf{b}\) become:

$$ (QR)^T(QR)\mathbf{x} = (QR)^T\mathbf{b} $$ $$ R^TQ^TQR\mathbf{x} = R^TQ^T\mathbf{b} $$

Since \(Q\) has orthonormal columns, \(Q^TQ = I\). So:

$$ R^TR\mathbf{x} = R^TQ^T\mathbf{b} $$

Since \(R\) is invertible (because \(A\) has linearly independent columns), we can multiply by \((R^T)^{-1}\):

$$ R\mathbf{x} = Q^T\mathbf{b} $$

This system can be solved efficiently by back substitution since \(R\) is upper triangular.
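
A brief sketch of this computation (my own example; numpy's `qr` and scipy's `solve_triangular` are assumed to be available):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)               # reduced QR: Q is 3x2 with orthonormal columns, R is 2x2
x = solve_triangular(R, Q.T @ b)     # solve R x = Q^T b by back substitution

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```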

Source: anthony.pdf, p. 368

Question 95

What is the definition of a symmetric matrix in terms of its inner product properties?

Answer

A real matrix \(A\) is symmetric if and only if for all vectors \(\mathbf{u}, \mathbf{v}\) in \(\mathbb{R}^n\):

$$ \langle A\mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{u}, A\mathbf{v} \rangle $$

where \(\langle \cdot, \cdot \rangle\) denotes the standard dot product. This property is fundamental to the Spectral Theorem for symmetric matrices.

Source: anthony.pdf, p. 403

Question 96

What is the definition of a self-adjoint operator?

Answer

In an inner product space, a linear operator \(T: V \to V\) is **self-adjoint** if for all \(\mathbf{u}, \mathbf{v} \in V\):

$$ \langle T(\mathbf{u}), \mathbf{v} \rangle = \langle \mathbf{u}, T(\mathbf{v}) \rangle $$

For finite-dimensional real inner product spaces, a linear operator is self-adjoint if and only if its matrix representation with respect to an orthonormal basis is symmetric. For complex inner product spaces, it's self-adjoint if its matrix representation is Hermitian.

Source: anthony.pdf, p. 403

Question 97

What is the relationship between the eigenvalues of a real symmetric matrix and its definiteness?

Answer

For a real symmetric matrix \(A\):

  • \(A\) is positive definite if and only if all its eigenvalues are \(> 0\).
  • \(A\) is negative definite if and only if all its eigenvalues are \(< 0\).
  • \(A\) is positive semi-definite if and only if all its eigenvalues are \(\ge 0\).
  • \(A\) is negative semi-definite if and only if all its eigenvalues are \(\le 0\).
  • \(A\) is indefinite if and only if it has both positive and negative eigenvalues.
Source: anthony.pdf, p. 410

Question 98

What is the definition of a normal operator?

Answer

In an inner product space, a linear operator \(T: V \to V\) is **normal** if it commutes with its adjoint \(T^*\).

$$ TT^* = T^*T $$

For finite-dimensional complex inner product spaces, a linear operator is normal if and only if its matrix representation with respect to an orthonormal basis is a normal matrix (i.e., \(AA^* = A^*A\)). Normal operators are precisely those that are unitarily diagonalisable.

Source: anthony.pdf, p. 405

Question 99

What is the significance of the Schur Decomposition Theorem?

Answer

The **Schur Decomposition Theorem** states that every square complex matrix \(A\) can be decomposed as \(A = UTU^*\), where \(U\) is a unitary matrix and \(T\) is an upper triangular matrix whose diagonal entries are the eigenvalues of \(A\).

This theorem is significant because it shows that every matrix is unitarily equivalent to an upper triangular matrix. It is a weaker form of diagonalisation but is always possible, even for non-diagonalisable matrices. It is often used in numerical algorithms for eigenvalue computation.

Source: anthony.pdf, p. 407

Question 100

Summarize the complete procedure for solving a system of linear differential equations \(\mathbf{y}' = A\mathbf{y}\) when \(A\) is not diagonalisable.

Answer

  1. Find the eigenvalues of \(A\). Determine that it is not diagonalisable (i.e., for some eigenvalue, geometric multiplicity < algebraic multiplicity).
  2. Find the Jordan Normal Form \(J\) of \(A\) and the matrix \(P\) of generalised eigenvectors such that \(J = P^{-1}AP\).
  3. Perform the change of variables \(\mathbf{y} = P\mathbf{z}\) to get the new system \(\mathbf{z}' = J\mathbf{z}\).
  4. Solve the simpler system for \(\mathbf{z}(t)\) by solving for each Jordan block, using back substitution and Theorem 2.2 from the subject guide.
  5. If initial conditions \(\mathbf{y}(0)\) are given, find \(\mathbf{z}(0) = P^{-1}\mathbf{y}(0)\) to determine the constants of integration.
  6. Transform the solution back to the original variables using \(\mathbf{y}(t) = P\mathbf{z}(t)\).
Source: 1_MT2175.pdf, pp. 25-28