Getting to reduced row echelon form: \[ \left[\begin{array}{rrr} 1 & 1 & 5 \\ 0 & -3 & -9 \end{array}\right] \xrightarrow{E_{2}(-1/3)} \left[\begin{array}{rrr} 1 & 1 & 5 \\ 0 & 1 & 3 \end{array}\right] \xrightarrow{E_{12}(-1)} \left[\begin{array}{rrr} 1 & 0 & 2 \\ 0 & 1 & 3 \end{array}\right] . \]
\[ \begin{aligned} z & =2 \\ x+y+z & =2 \\ 2 x+2 y+4 z & =8 \end{aligned} \]
Augmented matrix:
\[ \left[\begin{array}{rrrr} 0 & 0 & 1 & 2 \\ 1 & 1 & 1 & 2 \\ 2 & 2 & 4 & 8 \end{array}\right] \xrightarrow{E_{12}} \left[\begin{array}{rrrr} 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 2 & 2 & 4 & 8 \end{array}\right] \xrightarrow{E_{31}(-2)} \left[\begin{array}{rrrr} 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 4 \end{array}\right] \]
We keep on going… \[ \begin{aligned} & \left[\begin{array}{rrrr} 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 4 \end{array}\right] \xrightarrow{E_{3}(1/2)} \left[\begin{array}{rrrr} 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 1 & 2 \end{array}\right] \\ & \xrightarrow{E_{32}(-1)} \left[\begin{array}{rrrr} 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right] \xrightarrow{E_{12}(-1)} \left[\begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right] . \end{aligned} \]
There’s still no information on \(y\).
\[ \begin{aligned} & x=-y \\ & z=2 \\ & y \text { is free. } \end{aligned} \]
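A quick machine check of the reduction above, assuming sympy is available (its `rref` method returns the reduced matrix together with the pivot column indices):

```python
import sympy as sp

# Augmented matrix of the system above; rref() returns (RREF, pivot columns).
M = sp.Matrix([[0, 0, 1, 2],
               [1, 1, 1, 2],
               [2, 2, 4, 8]])
R, pivots = M.rref()
print(R)       # [[1, 1, 0, 0], [0, 0, 1, 2], [0, 0, 0, 0]]
print(pivots)  # (0, 2): column 1 (the y column) has no pivot, so y is free
```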
To rotate a vector \((x, y)=(r \cos \phi, r \sin \phi)\) by \(\theta\) (the result sits at angle \(\theta+\phi\)):
\[ \left[\begin{array}{l} r \cos (\theta+\phi) \\ r \sin (\theta+\phi) \end{array}\right] = \left[\begin{array}{rr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right]\left[\begin{array}{l} r \cos \phi \\ r \sin \phi \end{array}\right]=\left[\begin{array}{rr} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right] \]
Scaling:
\[ A = \left[\begin{array}{ll} z_1 & 0 \\ 0 & z_2 \end{array}\right] \]
Shearing: adding a constant shear factor times one coordinate to another coordinate of the point. \[ A = \left[\begin{array}{ll} 1 & s_2 \\ s_1 & 1 \end{array}\right] \]
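A minimal numpy sketch applying the three maps above to a point; the numeric values of \(\theta\), \(z_1\), \(z_2\), \(s_1\), \(s_2\) are made up for illustration:

```python
import numpy as np

theta, z1, z2, s1, s2 = np.pi / 2, 2.0, 3.0, 0.5, 0.0
p = np.array([1.0, 0.0])  # the point (x, y)

rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scale = np.array([[z1, 0.0],
                  [0.0, z2]])
shear = np.array([[1.0, s2],
                  [s1, 1.0]])

print(rotate @ p)  # ~[0, 1]: a quarter turn of (1, 0)
print(scale @ p)   # [2, 0]
print(shear @ p)   # [1, 0.5]
```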
Adjacency matrix: A square matrix whose \((i, j)\)th entry is the number of edges going from vertex \(i\) to vertex \(j\).
Incidence matrix: A matrix whose rows correspond to vertices and columns correspond to edges. The \((i, j)\)th entry is 1 if vertex \(i\) is the tail of edge \(j\), -1 if it is the head, and 0 otherwise.
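To make the two definitions concrete, a sketch building both matrices for a hypothetical three-vertex digraph (the edge list is made up for illustration):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]  # edge j goes tail -> head
n, m = 3, len(edges)

A = np.zeros((n, n), dtype=int)  # adjacency: A[i, j] = # edges i -> j
B = np.zeros((n, m), dtype=int)  # incidence: +1 at the tail, -1 at the head
for j, (tail, head) in enumerate(edges):
    A[tail, head] += 1
    B[tail, j] = 1
    B[head, j] = -1
```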
A distribution vector is a vector whose entries are nonnegative and sum to 1.
A stochastic matrix is a square matrix whose columns are distribution vectors.
A Markov chain is a discrete dynamical system whose initial state \(\mathbf{x}^{(0)}\) is a distribution vector and whose transition matrix \(A\) is stochastic, i.e., each column of \(A\) is a distribution vector.
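A minimal sketch of running such a chain, with a made-up 2-state transition matrix (each column sums to 1):

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])  # stochastic: each column is a distribution vector
x = np.array([1.0, 0.0])    # x^(0), a distribution vector

for _ in range(50):
    x = A @ x               # x^(k+1) = A x^(k)
print(x)                    # approaches the steady state [2/3, 1/3]
```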
From HW2: put this difference equation in matrix form:
\[ y_{k+2}-y_{k+1}-y_{k}=0 \]
Steps:
1. Make two equations. Solve for \(y_{k+2}\): \(y_{k+2}=y_{k+1}+y_{k}\). Also of course \(y_{k+1}=y_{k+1}\).
2. Define the vector \(\mathbf{y}^{(k)}=\left[\begin{array}{l} y_{k} \\ y_{k+1} \end{array}\right]\).
3. Put the two equations in matrix form:
\(\left[\begin{array}{l}y_{k+1} \\ y_{k+2}\end{array}\right]=\left[\begin{array}{ll}0 & 1 \\ 1 & 1\end{array}\right]\left[\begin{array}{c}y_{k} \\ y_{k+1}\end{array}\right]\), i.e., \(\mathbf{y}^{(k+1)}=A \mathbf{y}^{(k)}\).
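Iterating this with \(y_{0}=0\), \(y_{1}=1\) (assumed initial conditions) generates the Fibonacci numbers; a quick numpy check:

```python
import numpy as np

A = np.array([[0, 1],
              [1, 1]])
y = np.array([0, 1])  # y^(0) = [y_0, y_1]

for _ in range(10):
    y = A @ y         # y^(k+1) = A y^(k)
print(y)              # [y_10, y_11] = [55, 89]
```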
Reaction–diffusion (only the diffusion term is shown here):
\[ a_{x,t+1} = a_{x,t} + dt\left( \frac{D_{a}}{dx^{2}}(a_{x+1,t} + a_{x-1,t} - 2a_{x,t}) \right) \]
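A sketch of one such update on a 1-D grid; `D_a`, `dt`, `dx`, and the initial profile are made-up values (note `dt * D_a / dx**2` must be small enough for the explicit scheme to be stable):

```python
import numpy as np

D_a, dt, dx = 0.1, 0.01, 0.1
a = np.zeros(50)
a[25] = 1.0  # a spike of concentration in the middle

# Discrete Laplacian a_{x+1,t} + a_{x-1,t} - 2 a_{x,t} at the interior points
lap = a[2:] + a[:-2] - 2 * a[1:-1]
a[1:-1] += dt * (D_a / dx**2) * lap
```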
Heat in a rod: \[ \begin{equation*} -y_{i-1}+2 y_{i}-y_{i+1}=\frac{h^{2}}{K} f\left(x_{i}\right) \end{equation*} \]
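A sketch of assembling and solving these equations on \(n\) interior points, assuming zero temperature at both ends of the rod; \(K\), \(h\), and the source \(f\) are made-up values:

```python
import numpy as np

n, K = 10, 1.0
h = 1.0 / (n + 1)
f = np.ones(n)  # made-up heat source f(x_i) = 1 at the interior points

# Tridiagonal matrix with 2 on the diagonal and -1 on the off-diagonals
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
y = np.linalg.solve(T, h**2 / K * f)
```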
The determinant of a square \(n \times n\) matrix \(A=\left[a_{i j}\right]\), \(\operatorname{det} A\), is defined recursively:
If \(n=1\) then \(\operatorname{det} A=a_{11}\);
otherwise,
\[ \begin{aligned} \operatorname{det} A & =\sum_{k=1}^{n} a_{k 1}(-1)^{k+1} M_{k 1}(A) \\ & =a_{11} M_{11}(A)-a_{21} M_{21}(A)+\cdots+(-1)^{n+1} a_{n 1} M_{n 1}(A) \end{aligned} \]
where \(M_{k 1}(A)\), the \((k,1)\) minor of \(A\), is the determinant of the \((n-1) \times(n-1)\) matrix obtained by deleting row \(k\) and column 1 of \(A\).
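The definition translates directly into code; a sketch (exponential time, so for small matrices only):

```python
def det(A):
    """Cofactor expansion down the first column; A is a list of rows."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):  # k is 0-based here, so the sign is (-1)**k
        minor = [row[1:] for i, row in enumerate(A) if i != k]  # M_{k1}
        total += (-1) ** k * A[k][0] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```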
The column space of the \(m \times n\) matrix \(A\) is the subspace \(\mathcal{C}(A)\) of \(\mathbb{R}^{m}\) spanned by the columns of \(A\).
The row space of the \(m \times n\) matrix \(A\) is the subspace \(\mathcal{R}(A)\) of \(\mathbb{R}^{n}\) spanned by the transposes of the rows of \(A\).
A basis for the column space of \(A\) is the set of pivot columns of \(A\). (Find these by row reducing \(A\): the columns with leading 1s mark the pivot positions; take the corresponding columns of the original \(A\).)
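A sketch with sympy: `rref` reports the pivot columns, and those columns of the original \(A\) are the basis (the matrix is made up for illustration):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 7],
               [1, 2, 4]])
_, pivots = A.rref()
print(pivots)                      # (0, 2): column 1 is twice column 0
print([A.col(j) for j in pivots])  # basis for C(A): columns 0 and 2 of A
```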
The null space of the \(m \times n\) matrix \(A\) is the subset \(\mathcal{N}(A)\) of \(\mathbb{R}^{n}\):
\[ \mathcal{N}(A)=\left\{\mathbf{x} \in \mathbb{R}^{n} \mid A \mathbf{x}=\mathbf{0}\right\} \]
\(\mathcal{N}(A)\) is just the solution set to \(A \mathbf{x}=\mathbf{0}\).
For example, if \(A\) is invertible, \(A \mathbf{x}=\mathbf{0}\) has only the trivial solution \(\mathbf{x}=\mathbf{0}\),
so \(\mathcal{N}(A)\) is just \(\left\{\mathbf{0}\right\}\).
In fact, a square matrix \(A\) is invertible exactly when \(\mathcal{N}(A)=\{\mathbf{0}\}\).
Given an \(m \times n\) matrix \(A\):
1. Compute the reduced row echelon form \(R\) of \(A\).
2. Use \(R\) to find the general solution to the homogeneous system \(A \mathbf{x}=\mathbf{0}\):
\[ \mathbf{x}=x_{i_{1}} \mathbf{w}_{1}+x_{i_{2}} \mathbf{w}_{2}+\cdots+x_{i_{n-r}} \mathbf{w}_{n-r} \]
where \(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{n-r}}\) are the free variables and \(r\) is the number of pivots in \(R\) (the rank of \(A\)).
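sympy's `nullspace` carries out exactly these steps; checking it on the coefficient matrix from the earlier example:

```python
import sympy as sp

A = sp.Matrix([[0, 0, 1],
               [1, 1, 1],
               [2, 2, 4]])
print(A.nullspace())  # [Matrix([[-1], [1], [0]])]: x = -y, z = 0, y free
```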
To find the least squares solution to \(A \mathbf{x}=\mathbf{b}\), we minimize the squared error \(\left\|A \mathbf{x}-\mathbf{b}\right\|^{2}\) by solving the normal equations for \(\mathbf{x}\): \[ A^{T} A \mathbf{x} = A^{T} \mathbf{b} \]
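A minimal sketch: fit a line \(y=c_{0}+c_{1} t\) to made-up data by solving the normal equations directly (in practice `np.linalg.lstsq` is the more numerically stable route):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.9])         # made-up measurements
A = np.column_stack([np.ones_like(t), t])  # columns: 1, t

x = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations
print(x)                                   # ~[1.07, 0.97]
```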
If \(A\) is an \(m \times n\) full-column-rank matrix, then \(A=Q R\), where the columns of the \(m \times n\) matrix \(Q\) are orthonormal vectors and the \(n \times n\) matrix \(R\) is upper triangular with nonzero diagonal entries.
Here the \(\mathbf{q}_{i}\) are the orthogonal (not yet normalized) vectors produced by Gram–Schmidt: \[ A=\left[\mathbf{w}_{1}, \mathbf{w}_{2}, \mathbf{w}_{3}\right]=\left[\mathbf{q}_{1}, \mathbf{q}_{2}, \mathbf{q}_{3}\right]\left[\begin{array}{ccc} 1 & \frac{\mathbf{q}_{1} \cdot \mathbf{w}_{2}}{\mathbf{q}_{1} \cdot \mathbf{q}_{1}} & \frac{\mathbf{q}_{1} \cdot \mathbf{w}_{3}}{\mathbf{q}_{1} \cdot \mathbf{q}_{1}} \\ 0 & 1 & \frac{\mathbf{q}_{2} \cdot \mathbf{w}_{3}}{\mathbf{q}_{2} \cdot \mathbf{q}_{2}} \\ 0 & 0 & 1 \end{array}\right] \] Normalizing \(\mathbf{q}_{i}\) to unit length scales row \(i\) of this \(R\) by \(\left\|\mathbf{q}_{i}\right\|\), which gives the factorization of the theorem.
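A sketch of classical Gram–Schmidt producing those unnormalized \(\mathbf{q}_{i}\); the matrix \(W\) is made up for illustration:

```python
import numpy as np

def gram_schmidt(W):
    """Return unnormalized orthogonal columns q_i from the columns of W."""
    Q = W.astype(float)  # work on a copy
    for j in range(W.shape[1]):
        for i in range(j):
            q = Q[:, i]
            Q[:, j] -= (q @ W[:, j]) / (q @ q) * q  # subtract the projection
    return Q

W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(W)
print(np.round(Q.T @ Q, 12))  # diagonal matrix: the columns are orthogonal
```

(`np.linalg.qr` returns the normalized \(Q\) and the correspondingly rescaled \(R\) directly.)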
If \(A\) is a square \(n \times n\) matrix, the equation \(\operatorname{det}(\lambda I-A)=0\) is called the characteristic equation of \(A\).
The eigenvalues of \(A\) are the roots of the characteristic equation.
For each eigenvalue \(\lambda\) found in this way, use the null space algorithm to find a basis of the eigenspace \(\mathcal{N}(\lambda I-A)\).
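A quick check with numpy's `eig`, which returns the eigenvalues and a matrix whose columns are corresponding (unit-length) eigenvectors; the matrix here is the Fibonacci matrix from earlier:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)  # roots of det(lambda I - A) = lambda^2 - lambda - 1,
             # i.e. (1 +/- sqrt(5)) / 2
```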
If \(A=U S V^{T}\) is a singular value decomposition of \(A\), the pseudoinverse of \(A\) is \(A^{+}=V S^{+} U^{T}\),
where \(S^{+}\) is obtained by transposing \(S\) and replacing each nonzero singular value on its diagonal with its reciprocal.
Can find least squares solutions: \[ \begin{aligned} A \mathbf{x} & =\mathbf{b} \\ \mathbf{x} & \approx A^{+} \mathbf{b} \end{aligned} \] Here \(\mathbf{x}=A^{+} \mathbf{b}\) is the least squares solution of minimum norm.
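A minimal sketch with numpy's `pinv`, which computes \(A^{+}\) from the SVD as described above; the matrix and right-hand side are made up for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # singular, so there is no ordinary inverse
b = np.array([3.0, 4.0])

x = np.linalg.pinv(A) @ b
print(x)  # [3, 0]: the minimum-norm least squares solution
```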