Ch2 Lecture 5
Odds and ends
Block multiplication
Block Multiplication
Suppose we need to multiply these matrices…
\[ \left[\begin{array}{llll} 1 & 2 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right]\left[\begin{array}{llll} 0 & 0 & 2 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] . \]
. . .
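Because of the zero blocks, the product can be computed block-by-block: only the nonzero block products contribute. A quick NumPy check (NumPy assumed available; the slicing just names the blocks):

```python
import numpy as np

# The two matrices from the slide, partitioned into blocks.
A = np.array([[1, 2, 0, 0],
              [3, 4, 0, 0],
              [0, 0, 1, 0]])
C = np.array([[0, 0, 2, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# Multiply block-by-block; the block products involving a zero block vanish.
blockwise = np.zeros((3, 4), dtype=int)
blockwise[:2, 2:] = A[:2, :2] @ C[:2, 2:]  # top-left block of A times top-right block of C
blockwise[2:, 2:] = A[2:, 2:] @ C[2:, 2:]  # bottom-right blocks

assert np.array_equal(blockwise, A @ C)
```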
Column Vectors as Blocks
\[ A \mathbf{x}=\left[\mathbf{a}_{1}, \mathbf{a}_{2}, \mathbf{a}_{3}\right]\left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \end{array}\right]=\mathbf{a}_{1} x_{1}+\mathbf{a}_{2} x_{2}+\mathbf{a}_{3} x_{3} \]
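A small NumPy sketch of this column view; the \(3 \times 2\) matrix here is made up for illustration:

```python
import numpy as np

A = np.array([[1, 0],
              [2, 1],
              [0, 3]])
x = np.array([2, -1])

# A x as a linear combination of the columns of A
combo = A[:, 0] * x[0] + A[:, 1] * x[1]

assert np.array_equal(combo, A @ x)
```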
Transpose and Conjugate Transpose
Transpose and Conjugate Transpose
Let \(A=\left[a_{i j}\right]\) be an \(m \times n\) matrix with (possibly) complex entries.
The transpose of \(A\) is the \(n \times m\) matrix \(A^{T}\) obtained by interchanging the rows and columns of \(A\).
The conjugate of \(A\) is the matrix \(\bar{A}=\left[\overline{a_{i j}}\right]\).
Finally, the conjugate (Hermitian) transpose of \(A\) is the matrix \(A^{*}=\bar{A}^{T}\).
Example 1
Find the transpose and conjugate transpose of:
\[ \left[\begin{array}{lll}1 & 0 & 2 \\ 0 & 1 & 1\end{array}\right] \]
. . .
\[ \left[\begin{array}{lll} 1 & 0 & 2 \\ 0 & 1 & 1 \end{array}\right]^{*}=\left[\begin{array}{lll} 1 & 0 & 2 \\ 0 & 1 & 1 \end{array}\right]^{T}=\left[\begin{array}{ll} 1 & 0 \\ 0 & 1 \\ 2 & 1 \end{array}\right] \]
Example 2
Find the transpose and conjugate transpose of:
\[ \left[\begin{array}{rr}1 & 1+\mathrm{i} \\ 0 & 2 \mathrm{i}\end{array}\right] \]
. . .
\[ \left[\begin{array}{rr} 1 & 1+\mathrm{i} \\ 0 & 2 \mathrm{i} \end{array}\right]^{*}=\left[\begin{array}{rr} 1 & 0 \\ 1-\mathrm{i} & -2 \mathrm{i} \end{array}\right], \quad\left[\begin{array}{rr} 1 & 1+\mathrm{i} \\ 0 & 2 \mathrm{i} \end{array}\right]^{T}=\left[\begin{array}{rr} 1 & 0 \\ 1+\mathrm{i} & 2 \mathrm{i} \end{array}\right] \]
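In NumPy, `.T` gives the transpose and `.conj().T` the conjugate transpose; a check on Example 2 (NumPy assumed available):

```python
import numpy as np

A = np.array([[1, 1 + 1j],
              [0, 2j]])

A_T = A.T            # transpose
A_star = A.conj().T  # conjugate (Hermitian) transpose

assert A_T[1, 0] == 1 + 1j                        # transpose does not conjugate
assert A_star[1, 0] == 1 - 1j and A_star[1, 1] == -2j
```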
Laws of Matrix Transpose
Let \(A\) and \(B\) be matrices of the appropriate sizes so that the following operations make sense, and \(c\) a scalar.
\((A+B)^{T}=A^{T}+B^{T}\)
\((A B)^{T}=B^{T} A^{T}\)
\((c A)^{T}=c A^{T}\)
\(\left(A^{T}\right)^{T}=A\)
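These laws are easy to spot-check numerically. A sketch with random integer matrices (NumPy assumed available):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(2, 3))

assert np.array_equal((A + C).T, A.T + C.T)  # sum rule
assert np.array_equal((A @ B).T, B.T @ A.T)  # product rule: the order reverses
assert np.array_equal((3 * A).T, 3 * A.T)    # scalar rule
assert np.array_equal(A.T.T, A)              # double transpose
```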
Symmetric and Hermitian Matrices
The matrix \(A\) is said to be:
- symmetric if \(A^{T}=A\)
- Hermitian if \(A^{*}=A\)
. . .
Is this matrix symmetric? Hermitian?
\(\left[\begin{array}{rr}1 & 1+\mathrm{i} \\ 1-\mathrm{i} & 2\end{array}\right]\)
It’s Hermitian, but not symmetric.
\[ \left[\begin{array}{rr} 1 & 1+\mathrm{i} \\ 1-\mathrm{i} & 2 \end{array}\right]^{*}=\left[\begin{array}{rr} 1 & \overline{1+\mathrm{i}} \\ \overline{1-\mathrm{i}} & 2 \end{array}\right]^{T}=\left[\begin{array}{rr} 1 & 1+\mathrm{i} \\ 1-\mathrm{i} & 2 \end{array}\right] \]
Inner and outer products
Inner product
Let \(\mathbf{u}\) and \(\mathbf{v}\) be column vectors of the same size, say \(n \times 1\).
Then the inner product of \(\mathbf{u}\) and \(\mathbf{v}\) is the scalar quantity \(\mathbf{u}^{T} \mathbf{v}\).
. . .
Find the inner product of \[ \mathbf{u}=\left[\begin{array}{r} 2 \\ -1 \\ 1 \end{array}\right] \text { and } \mathbf{v}=\left[\begin{array}{l} 3 \\ 4 \\ 1 \end{array}\right] \]
\[ \mathbf{u}^{T} \mathbf{v}=[2,-1,1]\left[\begin{array}{l} 3 \\ 4 \\ 1 \end{array}\right]=2 \cdot 3+(-1) 4+1 \cdot 1=3 \]
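The same computation in NumPy, where `@` on two vectors gives \(\mathbf{u}^{T}\mathbf{v}\) (NumPy assumed available):

```python
import numpy as np

u = np.array([2, -1, 1])
v = np.array([3, 4, 1])

inner = u @ v  # u^T v = 2*3 + (-1)*4 + 1*1
assert inner == 3
```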
Outer product
- The outer product of \(\mathbf{u}\) and \(\mathbf{v}\) is the \(n \times n\) matrix \(\mathbf{u v}^{T}\).
. . .
Find the outer product of
\[ \mathbf{u}=\left[\begin{array}{r} 2 \\ -1 \\ 1 \end{array}\right] \text { and } \mathbf{v}=\left[\begin{array}{l} 3 \\ 4 \\ 1 \end{array}\right] \]
\[ \mathbf{u v}^{T}=\left[\begin{array}{r} 2 \\ -1 \\ 1 \end{array}\right][3,4,1]=\left[\begin{array}{rrr} 2 \cdot 3 & 2 \cdot 4 & 2 \cdot 1 \\ -1 \cdot 3 & -1 \cdot 4 & -1 \cdot 1 \\ 1 \cdot 3 & 1 \cdot 4 & 1 \cdot 1 \end{array}\right]=\left[\begin{array}{rrr} 6 & 8 & 2 \\ -3 & -4 & -1 \\ 3 & 4 & 1 \end{array}\right] \]
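The outer product of the same two vectors, via `np.outer` (NumPy assumed available):

```python
import numpy as np

u = np.array([2, -1, 1])
v = np.array([3, 4, 1])

outer = np.outer(u, v)  # u v^T, a 3 x 3 matrix with (i,j) entry u_i * v_j
expected = np.array([[ 6,  8,  2],
                     [-3, -4, -1],
                     [ 3,  4,  1]])
assert np.array_equal(outer, expected)
```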
Determinants
The determinant of a square \(n \times n\) matrix \(A=\left[a_{i j}\right]\), \(\operatorname{det} A\), is defined recursively:
If \(n=1\) then \(\operatorname{det} A=a_{11}\);
otherwise,
- suppose we have determinants for all square matrices of size less than \(n\)
- Define \(M_{i j}(A)\) as the determinant of the \((n-1) \times(n-1)\) matrix obtained from \(A\) by deleting the \(i\) th row and \(j\) th column of \(A\)
then
\[ \begin{aligned} \operatorname{det} A & =\sum_{k=1}^{n} a_{k 1}(-1)^{k+1} M_{k 1}(A) \\ & =a_{11} M_{11}(A)-a_{21} M_{21}(A)+\cdots+(-1)^{n+1} a_{n 1} M_{n 1}(A) \end{aligned} \]
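The recursive definition translates directly into code. A sketch (expansion along the first column; the test matrix is the \(3 \times 3\) example used later in these slides, NumPy assumed available):

```python
import numpy as np

def det_recursive(A):
    """Determinant by cofactor expansion along the first column."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for k in range(n):
        # Minor M_{k1}: delete row k and column 0 (0-indexed, so the sign is (-1)**k)
        minor = np.delete(np.delete(A, k, axis=0), 0, axis=1)
        total += (-1) ** k * A[k, 0] * det_recursive(minor)
    return total

A = np.array([[1, 2, 0],
              [0, 0, -1],
              [0, 2, 1]])
assert det_recursive(A) == 2
```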
Laws of Determinants
Determinant of an upper-triangular matrix: \[ \begin{aligned} \operatorname{det} A & =\left|\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 n} \\ 0 & a_{22} & \cdots & a_{2 n} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_{n n} \end{array}\right|=a_{11}\left|\begin{array}{cccc} a_{22} & a_{23} & \cdots & a_{2 n} \\ 0 & a_{33} & \cdots & a_{3 n} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_{n n} \end{array}\right| \\ & =\cdots=a_{11} \cdot a_{22} \cdots a_{n n} . \end{aligned} \]
. . .
D1: If \(A\) is an upper triangular matrix, then the determinant of \(A\) is the product of all the diagonal elements of \(A\).
More Laws of Determinants
D2: If \(B\) is obtained from \(A\) by multiplying one row of \(A\) by the scalar \(c\), then \(\operatorname{det} B=c \cdot \operatorname{det} A\).
D3: If \(B\) is obtained from \(A\) by interchanging two rows of \(A\), then \(\operatorname{det} B=\) \(-\operatorname{det} A\).
D4: If \(B\) is obtained from \(A\) by adding a multiple of one row of \(A\) to another row of \(A\), then \(\operatorname{det} B=\operatorname{det} A\).
- Note: this means that the determinant of a matrix is unchanged by elementary row operations.
Determinant Laws in terms of Elementary Matrices
- D2: \(\operatorname{det}\left(E_{i}(c) A\right)=c \cdot \operatorname{det} A\) (remember that for \(E_{i}(c)\) to be an elementary matrix, \(c \neq 0\) ).
- D3: \(\operatorname{det}\left(E_{i j} A\right)=-\operatorname{det} A\).
- D4: \(\operatorname{det}\left(E_{i j}(s) A\right)=\operatorname{det} A\).
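A numerical spot-check of D2–D4, building the elementary matrices by hand (NumPy assumed available; the test matrix is the example from later in these slides):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 0., -1.],
              [0., 2., 1.]])
d = np.linalg.det(A)

E2 = np.diag([5., 1., 1.])                              # scale row 0 by c = 5
E3 = np.array([[1., 0., 0.],
               [0., 0., 1.],
               [0., 1., 0.]])                           # swap rows 1 and 2
E4 = np.eye(3); E4[2, 0] = 3.                           # add 3*(row 0) to row 2

assert np.isclose(np.linalg.det(E2 @ A), 5 * d)   # D2
assert np.isclose(np.linalg.det(E3 @ A), -d)      # D3
assert np.isclose(np.linalg.det(E4 @ A), d)       # D4
```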
Determinant of Row Echelon Form
Let \(R\) be the reduced row echelon form of \(A\), obtained through multiplication by elementary matrices:
\[ R=E_{1} E_{2} \cdots E_{k} A . \]
. . .
Determinant of both sides:
\[ \operatorname{det} R=\operatorname{det}\left(E_{1} E_{2} \cdots E_{k} A\right)= \pm(\text { nonzero constant }) \cdot \operatorname{det} A \text {. } \]
. . .
Therefore, \(\operatorname{det} A=0\) precisely when \(\operatorname{det} R=0\).
. . .
- \(R\) is upper triangular, so \(\operatorname{det} R\) is the product of the diagonal entries of \(R\).
- If \(\operatorname{rank} A<n\), then there will be zeros in some of the diagonal entries, so \(\operatorname{det} R=0\).
- If \(\operatorname{rank} A=n\), the diagonal entries are all 1, so \(\operatorname{det} R=1\).
- A square matrix with rank \(n\) is invertible.
. . .
Therefore,
D5: The matrix \(A\) is invertible if and only if \(\operatorname{det} A \neq 0\).
Two more Determinant Laws
D6: Given square matrices \(A\), \(B\) of the same size,
\[ \operatorname{det} A B=\operatorname{det} A \operatorname{det} B \text {. } \]
. . .
(but beware: in general, \(\operatorname{det}(A+B) \neq \operatorname{det} A+\operatorname{det} B\))
D7: For all square matrices \(A\), \(\operatorname{det} A^{T}=\operatorname{det} A\)
Easier way to calculate determinant
Just use the det function in Python! (That was Copilot’s autocomplete…)
. . .
Use elementary row operations to get the matrix into upper triangular form, then multiply the diagonal entries.
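That recipe, sketched as a function: eliminate below each pivot, track sign flips from row swaps (D3 and D4), and multiply the diagonal of the resulting triangular matrix (D1). NumPy assumed available:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    U = A.astype(float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # Pick the largest pivot in column j; a row swap flips the sign (D3).
        p = j + np.argmax(np.abs(U[j:, j]))
        if U[p, j] == 0:
            return 0.0
        if p != j:
            U[[j, p]] = U[[p, j]]
            sign = -sign
        # Row replacements leave the determinant unchanged (D4).
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    # U is now upper triangular: multiply the diagonal (D1).
    return sign * np.prod(np.diag(U))

A = np.array([[1, 2, 0],
              [0, 0, -1],
              [0, 2, 1]])
assert np.isclose(det_by_elimination(A), np.linalg.det(A))
```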
An Inverse Formula
Adjoint, Minor, Cofactor Matrices
- \(M_{i j}(A)\), the \((i,j)\) th minor of \(A\), is the determinant of the \((n-1) \times (n-1)\) matrix obtained by deleting the \(i\)th row and \(j\)th column of \(A\).
- \(A_{i j}=(-1)^{i+j} M_{i j}(A)\) is the \((i,j)\)th cofactor of \(A\).
- Matrix of minors \(M(A)\) is the matrix whose \((i,j)\) th entry is the minor \(M_{i j}(A)\).
- Cofactor matrix \(A_{\text {cof }}\) is the matrix whose \((i,j)\) th entry is the cofactor \(A_{i j}\).
- Adjoint (or adjugate) of \(A\) is the transpose of the cofactor matrix, \(\operatorname{adj} A=A_{\text {cof }}^{T}\) (not to be confused with the conjugate transpose \(A^{*}\)).
Example
\(A=\left[\begin{array}{rrr}1 & 2 & 0 \\ 0 & 0 & -1 \\ 0 & 2 & 1\end{array}\right]\)
. . .
Matrix of minors:
\[ M(A)=\left[\begin{array}{rrr} 2 & 0 & 0 \\ 2 & 1 & 2 \\ -2 & -1 & 0 \end{array}\right] \]
. . .
Cofactor matrix and adjoint:
Overlay with the checkerboard of signs \(\left[\begin{array}{ccc} + & - & + \\ - & + & - \\ + & - & + \end{array}\right]\) to get the cofactor matrix \(A_{\text{cof}}=\left[\begin{array}{rrr} 2 & 0 & 0 \\ -2 & 1 & -2 \\ -2 & 1 & 0 \end{array}\right]\). Take the transpose, to get
\[ \operatorname{adj} A=\left[\begin{array}{rrr} 2 & -2 & -2 \\ 0 & 1 & 1 \\ 0 & -2 & 0 \end{array}\right] \]
Back to the Inverse Formula
\[ \begin{equation*} \sum_{k=1}^{n} A_{k i} a_{k j}=\delta_{i j} \operatorname{det} A \end{equation*} \]
Checking, with our example
Remember, we had \(A=\left[\begin{array}{rrr}1 & 2 & 0 \\ 0 & 0 & -1 \\ 0 & 2 & 1\end{array}\right]\)
. . .
We had computed the adjoint as \(\operatorname{adj} A=\left[\begin{array}{rrr}2 & -2 & -2 \\ 0 & 1 & 1 \\ 0 & -2 & 0\end{array}\right]\)
. . .
Check:
\[ A \operatorname{adj} A=\left[\begin{array}{rrr} 1 & 2 & 0 \\ 0 & 0 & -1 \\ 0 & 2 & 1 \end{array}\right]\left[\begin{array}{rrr} 2 & -2 & -2 \\ 0 & 1 & 1 \\ 0 & -2 & 0 \end{array}\right]=\left[\begin{array}{lll} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{array}\right]=(\operatorname{det} A) I_{3} . \]
. . .
So, \(A^{-1}=\frac{1}{\operatorname{det} A} \operatorname{adj} A\).
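The whole construction, minors through inverse, as a sketch in NumPy (assumed available; `adjugate` is our own helper name):

```python
import numpy as np

def adjugate(A):
    """Adjoint of A: the transpose of the cofactor matrix."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_{ij}: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[1., 2., 0.],
              [0., 0., -1.],
              [0., 2., 1.]])
adjA = adjugate(A)

assert np.allclose(adjA, [[2, -2, -2], [0, 1, 1], [0, -2, 0]])
assert np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3))       # D8
assert np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A))    # D9
```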
Cramer’s Rule
Explicit formula for solving linear systems with a nonsingular coefficient matrix.
. . .
Solve \(A \mathbf{x}=\mathbf{b}\)
. . .
Multiply both sides by \(A^{-1}\): \(\mathbf{x}=A^{-1} \mathbf{b}\)
. . .
Use the formula for \(A^{-1}\): \(\mathbf{x}=\frac{1}{\operatorname{det} A}(\operatorname{adj} A) \mathbf{b}\)
. . .
The \(i\) th component of \(\mathbf{x}\) is
\[ x_{i}=\frac{1}{\operatorname{det} A} \sum_{j=1}^{n} A_{j i} b_{j} \]
Interpretation of Cramer’s Rule
\[ x_{i}=\frac{1}{\operatorname{det} A} \sum_{j=1}^{n} A_{j i} b_{j} \]
. . .
- Let \(A\) be an invertible \(n \times n\) matrix and \(\mathbf{b}\) an \(n \times 1\) column vector.
- Denote by \(B_{i}\) the matrix obtained from \(A\) by replacing the \(i\) th column of \(A\) by \(\mathbf{b}\).
- Then the linear system \(A \mathbf{x}=\mathbf{b}\) has unique solution \(\mathbf{x}=\left(x_{1}, x_{2}, \ldots, x_{n}\right)\), where
- \[ x_{i}=\frac{\operatorname{det} B_{i}}{\operatorname{det} A}, \quad i=1,2, \ldots, n \]
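Cramer’s Rule as a sketch in NumPy (assumed available; the right-hand side \(\mathbf{b}\) is made up for illustration, and the answer is compared against `np.linalg.solve`):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (A must be invertible)."""
    detA = np.linalg.det(A)
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        Bi = A.astype(float).copy()
        Bi[:, i] = b                        # replace column i of A by b
        x[i] = np.linalg.det(Bi) / detA     # x_i = det(B_i) / det(A)
    return x

A = np.array([[1., 2., 0.],
              [0., 0., -1.],
              [0., 2., 1.]])
b = np.array([1., 1., 1.])

assert np.allclose(cramer(A, b), np.linalg.solve(A, b))
```

Note that this costs \(n+1\) determinant evaluations, so it is a theoretical formula more than a practical algorithm for large \(n\).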
Summary of Laws of Determinants
Let \(A, B\) be \(n \times n\) matrices.
D1: If \(A\) is upper triangular, \(\operatorname{det} A\) is the product of all the diagonal elements of \(A\).
D2: \(\operatorname{det}\left(E_{i}(c) A\right)=c \cdot \operatorname{det} A\).
D3: \(\operatorname{det}\left(E_{i j} A\right)=-\operatorname{det} A\).
D4: \(\operatorname{det}\left(E_{i j}(s) A\right)=\operatorname{det} A\).
D5: The matrix \(A\) is invertible if and only if \(\operatorname{det} A \neq 0\).
D6: \(\operatorname{det} A B=\operatorname{det} A \operatorname{det} B\).
D7: \(\operatorname{det} A^{T}=\operatorname{det} A\).
D8: \(A \operatorname{adj} A=(\operatorname{adj} A) A=(\operatorname{det} A) I\).
D9: If \(\operatorname{det} A \neq 0\), then \(A^{-1}=\frac{1}{\operatorname{det} A} \operatorname{adj} A\).
Quadratic Forms
A quadratic form is a homogeneous polynomial of degree 2 in \(n\) variables. For example,
. . .
\[ Q(x, y, z)=x^{2}+2 y^{2}+z^{2}+2 x y+y z+3 x z . \]
. . .
We can express this in matrix form!
\[ \begin{aligned} x(x+2 y+3 z)+y(2 y+z)+z^{2} & =\left[\begin{array}{lll} x & y & z \end{array}\right]\left[\begin{array}{c} x+2 y+3 z \\ 2 y+z \\ z \end{array}\right] \end{aligned} \]
. . .
\[ \begin{aligned} =\left[\begin{array}{lll} x & y & z \end{array}\right]\left[\begin{array}{lll} 1 & 2 & 3 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{array}\right]\left[\begin{array}{c} x \\ y \\ z \end{array}\right]=\mathbf{x}^{T} A \mathbf{x}, \end{aligned} \]
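A quick numerical check that \(\mathbf{x}^{T} A \mathbf{x}\) reproduces \(Q\), using the upper-triangular matrix from the slide and a test point chosen for illustration (NumPy assumed available):

```python
import numpy as np

# The upper-triangular matrix A from the slide.
A = np.array([[1, 2, 3],
              [0, 2, 1],
              [0, 0, 1]])

def Q(x, y, z):
    """The quadratic form written out as a polynomial."""
    return x**2 + 2*y**2 + z**2 + 2*x*y + y*z + 3*x*z

v = np.array([1, -2, 3])
assert v @ A @ v == Q(*v)  # x^T A x agrees with the polynomial
```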