Ch3 Lecture 1

A bit about gas in a tube

| Time (sec) \(\backslash\) Position (m) | 0 | \(1/6\) | \(1/3\) | \(1/2\) | \(2/3\) | \(5/6\) | 1 |
|---|---|---|---|---|---|---|---|
| \(t=240\) | 0.0 | 0.032 | 1.23 | 3.69 | 1.23 | 0.032 | 0.0 |
| \(t=270\) | 0.0 | 0.051 | 1.21 | 3.48 | 1.21 | 0.051 | 0.0 |

. . .

Goal: find the diffusion coefficient \(D\).

. . .

\[ \begin{equation*} y_{i, t+1}=y_{i, t}+\frac{k D}{h^{2}}\left(y_{i-1, t}-2 y_{i, t}+y_{i+1, t}\right)+k f_{i, t} \end{equation*} \]

. . .

Writing \(d = kD/h^{2}\) and dropping the source term, this becomes

\[ \begin{equation*} y_{i, t+1}=y_{i, t}+d\left(y_{i-1, t}-2 y_{i, t}+y_{i+1, t}\right) \end{equation*} \]

\[ \begin{aligned} y_{0, t+1} &= 0 \\ y_{1, t+1} &= y_{1, t}+d\left(0-2 y_{1, t}+y_{2, t}\right) \\ y_{3, t+1} &= y_{3, t}+d\left(y_{2, t}-2 y_{3, t}+y_{4, t}\right) \\ y_{6, t+1} &= 0 \end{aligned} \]
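To make the update concrete, here is a minimal Python (numpy) sketch of one explicit step, assuming hypothetical values of the grid spacing \(h\), time step \(k\), and a trial diffusion coefficient \(D\); none of these values are fixed in the notes.

```python
import numpy as np

# Hypothetical values chosen only for illustration; h, k, D are not given in the notes.
h, k, D = 1/6, 30.0, 1e-4
d = k * D / h**2                      # combined step parameter d = kD/h^2

y = np.array([0.0, 0.032, 1.23, 3.69, 1.23, 0.032, 0.0])  # profile at t = 240 s

def explicit_step(y, d):
    """One step of y_{i,t+1} = y_{i,t} + d*(y_{i-1,t} - 2 y_{i,t} + y_{i+1,t})."""
    y_next = y.copy()
    y_next[1:-1] = y[1:-1] + d * (y[:-2] - 2 * y[1:-1] + y[2:])
    y_next[0] = y_next[-1] = 0.0      # endpoints held at zero
    return y_next

print(explicit_step(y, d))
```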

. . .

We want to solve these for \(y_{i,t}\) given \(y_{i,t+1}\).

\[ \begin{equation*} y_{i, t+1}-d\left(y_{i-1, t}-2 y_{i, t}+y_{i+1, t}\right)=y_{i, t} \end{equation*} \]

\[ \approx y_{i, t+1}-d\left(y_{i-1, t+1}-2 y_{i, t+1}+y_{i+1, t+1}\right) ? \]

In matrix form

In matrix form, this becomes \(y_{t+1}=y_{t}+ d My_{t}\):

\[ M=\left[\begin{array}{ccccccc} -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & -2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 1 & -2 \end{array}\right] \]
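The same step written in matrix form, \(y_{t+1}=y_{t}+dMy_{t}\); a short numpy sketch building the tridiagonal \(M\) above, with an illustrative value of \(d\):

```python
import numpy as np

n, d = 7, 0.1                         # n matches the 7 grid points; d is illustrative

# Tridiagonal M: -2 on the diagonal, 1 on the first off-diagonals
M = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

y_t = np.array([0.0, 0.032, 1.23, 3.69, 1.23, 0.032, 0.0])
y_next = y_t + d * M @ y_t            # y_{t+1} = (I + dM) y_t
print(y_next)
```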

. . .

\[ y_{t}=y_{t+1}- d My_{t} \overset{?}\approx y_{t+1}- d My_{t+1} \]

Turns out it works… but why?

. . .

Using matrix inversion

  • \(y_{t+1}=y_{t}+ dMy_{t}\)
  • is the same as \(y_{t+1}=Iy_{t}+ dMy_{t}\)
  • in other words, \(y_{t+1}=(I+dM)y_{t}\)

. . .

How do we solve this for \(y_{t}\), given \(y_{t+1}\)?

. . .

\[ y_{t+1}=(I+dM)y_{t} \Rightarrow y_{t}=(I+dM)^{-1}y_{t+1} \]
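A sketch of the backward step in numpy: rather than forming \((I+dM)^{-1}\) explicitly, solve the linear system \((I+dM)\,y_{t}=y_{t+1}\), which gives the same result with better numerics. The values of \(n\) and \(d\) are again illustrative.

```python
import numpy as np

n, d = 7, 0.1
M = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
I = np.eye(n)

y_next = np.array([0.0, 0.051, 1.21, 3.48, 1.21, 0.051, 0.0])  # profile at the later time

# y_t = (I + dM)^{-1} y_{t+1}, computed by solving the system
y_t = np.linalg.solve(I + d * M, y_next)
print(y_t)
```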

. . .

Two different approaches: one exact, the other only approximate, and yet it gave nearly the right answer. Why?

\[ y_{t+1}=(I+dM)y_{t} \Rightarrow y_{t}=(I+dM)^{-1}y_{t+1} \]

\[ y_{t}=y_{t+1}- dMy_{t} \overset{?}\approx y_{t+1}- d My_{t+1} \]

. . .

Connection: we will use a Taylor series to show that the two are approximately equal.

. . .

Remember Taylor series?

For small \(x\), we can approximate \(\frac{1}{1+x}\) as \(1-x+x^2-x^3+\ldots\)

. . .

If \(A\) is a matrix that is small enough (more precisely, if its spectral radius is less than 1), then \(X = I + A\) is invertible and \(X^{-1} = I - A + A^2 - A^3 + \ldots\)

. . .

So to first order in d,

\[ y_{t}=(I+dM)^{-1}y_{t+1} \approx \left(I-dM\right) y_{t+1}=y_{t+1}-d M y_{t+1} \]
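A quick numerical check of this first-order claim (illustrative \(n\) and a small \(d\)); the gap between \((I+dM)^{-1}\) and \(I-dM\) should be of order \(d^{2}\):

```python
import numpy as np

n, d = 7, 0.01
M = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
I = np.eye(n)

exact = np.linalg.inv(I + d * M)       # (I + dM)^{-1}
approx = I - d * M                     # first-order truncation of the series
print(np.max(np.abs(exact - approx)))  # small: on the order of d^2
```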

. . .

Now \(y_{t+1}=y_{t} + dM y_{t}\). Plugging this in,

\(y_{t}=(I+dM)^{-1}y_{t+1} \approx y_{t+1}-d M \left(y_{t}+d M y_{t}\right)\approx y_{t+1}-d M y_{t}\),

where the last step drops the term of order \(d^{2}\).

The point

The problem was much easier to solve once we put it in matrix form.

Independence and Bases

Linear Independence

The vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \ldots, \mathbf{v}_{n}\) are linearly dependent if there exist scalars \(c_{1}, c_{2}, \ldots, c_{n}\), not all zero, such that

\[ \begin{equation*} c_{1} \mathbf{v}_{1}+c_{2} \mathbf{v}_{2}+\cdots+c_{n} \mathbf{v}_{n}=\mathbf{0} \end{equation*} \]

Otherwise, the vectors are called linearly independent.

Checking for Linear Independence

Is this set of vectors linearly independent?

\[ \left[\begin{array}{l}1 \\ 1 \\ 0\end{array}\right],\left[\begin{array}{l}0 \\ 1 \\ 1\end{array}\right],\left[\begin{array}{r}1 \\ -1 \\ -2\end{array}\right] \]

. . .

Goal: find some set of scalars \(c_{1}, c_{2}, c_{3}\), not all zero, such that \(c_{1} \mathbf{v}_{1}+c_{2} \mathbf{v}_{2}+c_{3} \mathbf{v}_{3}=\mathbf{0}\).

. . .

Define:

  • the matrix \(A=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\right]\)
  • the vector \(\mathbf{c}=\left(c_{1}, c_{2}, c_{3}\right)\)

. . .

Then:

\[ c_{1} \mathbf{v}_{1}+c_{2} \mathbf{v}_{2}+c_{3} \mathbf{v}_{3}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\right]\left[\begin{array}{l} c_{1} \\ c_{2} \\ c_{3} \end{array}\right]=A \mathbf{c} \]

So we just need to see if there is a nontrivial solution to \(A \mathbf{c}=\mathbf{0}\).

Is this set of vectors linearly independent?

\[ \left[\begin{array}{l}1 \\ 1 \\ 0\end{array}\right],\left[\begin{array}{l}0 \\ 1 \\ 1\end{array}\right],\left[\begin{array}{r}1 \\ -1 \\ -2\end{array}\right] \]

Put in matrix form and find the reduced row echelon form:

. . .

\[ \left[\begin{array}{rrr} 1 & 0 & 1 \\ 1 & 1 & -1 \\ 0 & 1 & -2 \end{array}\right] \xrightarrow{E_{21}(-1)}\left[\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 1 & -2 \end{array}\right] \xrightarrow{E_{32}(-1)}\left[\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{array}\right] \]

. . .

From the RREF, \(c_{1}=-c_{3}\) and \(c_{2}=2 c_{3}\); taking \(c_{3}=1\) gives

\[ -1 \mathbf{v}_{1}+2 \mathbf{v}_{2}+1 \mathbf{v}_{3}=\mathbf{0} \]

So the vectors are linearly dependent.
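For a concrete check, here is a short sympy sketch (the library choice is an assumption, not part of the notes): rref() returns the reduced form together with the pivot columns, and nullspace() returns the dependence coefficients.

```python
from sympy import Matrix

A = Matrix([[1, 0,  1],
            [1, 1, -1],
            [0, 1, -2]])              # columns are v1, v2, v3

R, pivots = A.rref()
print(R)                              # bottom row of zeros: the columns are dependent
print(A.nullspace())                  # [(-1, 2, 1)], i.e. -v1 + 2 v2 + v3 = 0
```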

Example 2

Are the following vectors linearly independent?

\(\left[\begin{array}{l}0 \\ 1 \\ 0\end{array}\right],\left[\begin{array}{l}1 \\ 1 \\ 0\end{array}\right],\left[\begin{array}{l}0 \\ 1 \\ 1\end{array}\right]\)

. . .

Set up A = \(\left[\begin{array}{lll}0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1\end{array}\right]\)

. . .

A different approach: expand the determinant along the first row. \[ \operatorname{det}\left[\begin{array}{lll} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{array}\right]=-1 \operatorname{det}\left[\begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right]=-1 \]

. . .

Determinant is non-zero, so \(A\) is invertible (non-singular)

. . .

Only solution to \(A \mathbf{c}=0\) is \(\mathbf{c}=0\).

. . .

So the vectors are linearly independent.

Summary

Summary: Checking for Linear Independence

  1. Set up the matrix \(A\) with the vectors as columns.
  2. Then do any one of the following (a code sketch follows the list):
    • Find the reduced row echelon form of \(A\): the columns are independent exactly when every column has a pivot.
    • Calculate the determinant of \(A\) (square \(A\) only): the columns are independent exactly when it is nonzero.
    • Check whether the only solution to \(A \mathbf{c}=\mathbf{0}\) is \(\mathbf{c}=\mathbf{0}\).
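A sketch of checks 2 and 3 applied to the vectors of Example 2 (check 1, the RREF, was sketched above); numpy is assumed here.

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 0, 1]], dtype=float)           # columns are the vectors of Example 2

# Check 2 (square A only): nonzero determinant means the columns are independent
print(np.linalg.det(A))                          # approximately -1.0

# Check 3: Ac = 0 has only c = 0 exactly when A has full column rank
print(np.linalg.matrix_rank(A) == A.shape[1])    # True
```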

Basis of a Vector Space

A spanning set for a vector space is a set of vectors whose linear combinations produce every vector in the space.

A basis for the vector space \(V\) is a spanning set of vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \ldots, \mathbf{v}_{n}\) that is a linearly independent set.

Standard Basis

The standard basis for \(\mathbb{R}^{n}\) is the set of vectors \(\mathbf{e}_{1}, \mathbf{e}_{2}, \ldots, \mathbf{e}_{n}\) given by the columns of the \(n \times n\) identity matrix.

. . .

Showing that the Standard Basis is a Basis of \(\mathbb{R}^{n}\) (or \(\mathbb{C}^{n}\))

  • Let \(\mathbf{v}=\left(c_{1}, c_{2}, \ldots, c_{n}\right)\) be a vector from \(V\)
  • \(c_{1}, c_{2}, \ldots, c_{n}\) are scalars (real or complex)

. . .

\[ \begin{aligned} \mathbf{v} & =\left[\begin{array}{c} c_{1} \\ c_{2} \\ \vdots \\ c_{n} \end{array}\right]=c_{1}\left[\begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right]+c_{2}\left[\begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array}\right]+\cdots+c_{n}\left[\begin{array}{c} 0 \\ \vdots \\ 0 \\ 1 \end{array}\right] \\ & =c_{1} \mathbf{e}_{1}+c_{2} \mathbf{e}_{2}+\cdots+c_{n} \mathbf{e}_{n} . \end{aligned} \]

Invertibility, Basis, and Linear Independence

An \(n \times n\) real matrix \(A\) is invertible if and only if its columns are linearly independent

. . .

in which case they form a basis of \(\mathbb{R}^{n}\)

Column and Row Spaces

The column space of the \(m \times n\) matrix \(A\) is the subspace \(\mathcal{C}(A)\) of \(\mathbb{R}^{m}\) spanned by the columns of \(A\).

. . .

The row space of the \(m \times n\) matrix \(A\) is the subspace \(\mathcal{R}(A)\) of \(\mathbb{R}^{n}\) spanned by the transposes of the rows of \(A\)

Null Space

The null space of the \(m \times n\) matrix \(A\) is the subset \(\mathcal{N}(A)\) of \(\mathbb{R}^{n}\)

\[ \mathcal{N}(A)=\left\{\mathbf{x} \in \mathbb{R}^{n} \mid A \mathbf{x}=\mathbf{0}\right\} \]

. . .

\(\mathcal{N}(A)\) is just the solution set to \(A \mathbf{x}=\mathbf{0}\)

. . .

For example, if \(A\) is invertible, \(A \mathbf{x}=\mathbf{0}\) has only the trivial solution \(\mathbf{x}=\mathbf{0}\)

. . .

so \(\mathcal{N}(A)\) is just \(\left\{\mathbf{0}\right\}\).

. . .

\(A\) is invertible exactly if \(\mathcal{N}(A)=\{\mathbf{0}\}\)

Kernel of an Operator

The kernel of the linear operator \(T\) : \(V \rightarrow W\) is the subspace of \(V\) defined by

\[ \operatorname{ker}(T)=\{\mathbf{x} \in V \mid T(\mathbf{x})=\mathbf{0}\} \]

For matrix operators, the kernel is the same thing as the null space.

Example

Find the null space of the matrix

\[ A=\left[\begin{array}{rrrr} 1 & 1 & 1 & -1 \\ 0 & 1 & 2 & 1 \end{array}\right] \]

corresponding to the system of equations \[ \begin{aligned} x_{1}+x_{2}+x_{3}-x_{4} & =0 \\ x_{2}+2 x_{3}+x_{4} & =0 \end{aligned} \]

. . .

  1. Find the RREF of \(A\):

. . .

\[ \left[\begin{array}{rrrr} 1 & 1 & 1 & -1 \\ 0 & 1 & 2 & 1 \end{array}\right] \xrightarrow{E_{12}(-1)}\left[\begin{array}{rrrr} 1 & 0 & -1 & -2 \\ 0 & 1 & 2 & 1 \end{array}\right] \]

. . .

  2. Solve for the pivot variables \(x_{1}, x_{2}\) in terms of the free variables \(x_{3}, x_{4}\):

\[ \begin{aligned} & x_{1}=x_{3}+2 x_{4} \\ & x_{2}=-2 x_{3}-x_{4} . \end{aligned} \]

\[ \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]=\left[\begin{array}{c} x_{3}+2 x_{4} \\ -2 x_{3}-x_{4} \\ x_{3} \\ x_{4} \end{array}\right]=x_{3}\left[\begin{array}{r} 1 \\ -2 \\ 1 \\ 0 \end{array}\right]+x_{4}\left[\begin{array}{r} 2 \\ -1 \\ 0 \\ 1 \end{array}\right] . \]

. . .

The general solution to the homogeneous system is an arbitrary linear combination of these two vectors

. . .

\[ \mathcal{N}(A)=\operatorname{span}\left\{\left[\begin{array}{r} 1 \\ -2 \\ 1 \\ 0 \end{array}\right],\left[\begin{array}{r} 2 \\ -1 \\ 0 \\ 1 \end{array}\right]\right\} \subseteq \mathbb{R}^{4} \]

. . .

These are independent, so they form a basis for \(\mathcal{N}(A)\)
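The same computation in a sympy sketch; nullspace() returns one basis vector per free variable.

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, -1],
            [0, 1, 2,  1]])

for v in A.nullspace():               # one basis vector per free variable (x3, x4)
    print(v.T)                        # (1, -2, 1, 0) and (2, -1, 0, 1)
```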

Using the Null Space

Suppose that a Markov chain has the transition matrix \(A=\left[\begin{array}{ll}0.7 & 0.4 \\ 0.3 & 0.6\end{array}\right]\), so that

\[ \mathbf{x}^{(k+1)}=A \mathbf{x}^{(k)} \]

. . .

What is the steady-state vector \(\mathbf{x}\)? (The limit as \(k \rightarrow \infty\) of \(\mathbf{x}^{(k)}\))

. . .

Take the limit of both sides as \(k \rightarrow \infty\):

\[ \mathbf{x}=A \mathbf{x} \]

. . .

\[ \mathbf{0}=\mathbf{x}-A \mathbf{x}=I \mathbf{x}-A \mathbf{x}=(I-A) \mathbf{x} \]

So \(\mathbf{x}\) is in the null space of \(I-A\).

\[ I-A=\left[\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right]-\left[\begin{array}{ll} 0.7 & 0.4 \\ 0.3 & 0.6 \end{array}\right]=\left[\begin{array}{rr} 0.3 & -0.4 \\ -0.3 & 0.4 \end{array}\right] \text {. } \]

. . .

\[ \left[\begin{array}{rr} 0.3 & -0.4 \\ -0.3 & 0.4 \end{array}\right] \xrightarrow{E_{21}(1)}\left[\begin{array}{rr} 0.3 & -0.4 \\ 0 & 0 \end{array}\right] \xrightarrow{\text{scale row 1 by } 1/0.3}\left[\begin{array}{rr} 1 & -4 / 3 \\ 0 & 0 \end{array}\right] \]

The solution set is \(x_{1}=4 x_{2} / 3\) with \(x_{2}\) free, so \(\mathcal{N}(I-A)\) is spanned by \((4/3, 1)\).

. . .

Scaling \((4/3, 1)\) so that its entries sum to 1 (the steady state is a probability vector) gives

\[ (3 / 7)(4 / 3,1)=(4 / 7,3 / 7) \approx(0.57143,0.42857) \]
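A sketch of the whole steady-state calculation (sympy and numpy are assumed tools): take a vector spanning \(\mathcal{N}(I-A)\), rescale so its entries sum to 1, and cross-check by iterating the chain.

```python
import numpy as np
from sympy import Matrix, Rational, eye

A = Matrix([[Rational(7, 10), Rational(4, 10)],
            [Rational(3, 10), Rational(6, 10)]])

v = (eye(2) - A).nullspace()[0]          # spans N(I - A); sympy gives (4/3, 1)
x = v / sum(v)                           # rescale so the entries sum to 1
print(x.T)                               # (4/7, 3/7)

# Cross-check: iterate x^(k+1) = A x^(k) from an arbitrary probability vector
xk = np.array([1.0, 0.0])
An = np.array([[0.7, 0.4], [0.3, 0.6]])
for _ in range(50):
    xk = An @ xk
print(xk)                                # approximately (0.5714, 0.4286)
```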

Using the Null Space to find a basis

Find a basis for the column space of the matrix

\[ A=\left[\begin{array}{rrrr} 1 & 1 & 1 & -1 \\ 0 & 1 & 2 & 1 \end{array}\right] \]

. . .

From before, we found the RREF and used this to find a general form of the solutions to \(A \mathbf{x}=\mathbf{0}\)

\(\mathbf{x}=\left(x_{1}, x_{2}, x_{3}, x_{4}\right)=\) \(\left(x_{3}+2 x_{4},-2 x_{3}-x_{4}, x_{3}, x_{4}\right)\)

. . .

Write \(A=\left[\mathbf{a}_{1}, \mathbf{a}_{2}, \mathbf{a}_{3}, \mathbf{a}_{4}\right]\)

. . .

\(\mathbf{0}=x_{1} \mathbf{a}_{1}+x_{2} \mathbf{a}_{2}+x_{3} \mathbf{a}_{3}+x_{4} \mathbf{a}_{4}\)

. . .

\(\mathbf{0}=\left(x_{3}+2 x_{4}\right) \mathbf{a}_{1}-\left(2 x_{3}+x_{4}\right) \mathbf{a}_{2}+x_{3} \mathbf{a}_{3}+x_{4} \mathbf{a}_{4}\)

. . .

Take \(x_{3}=1\) and \(x_{4}=0\). Then \(\mathbf{0}=\mathbf{a}_{1}-2 \mathbf{a}_{2}+\mathbf{a}_{3}\)

. . .

So \(\mathbf{a}_{3}\) is a linear combination of \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\)

. . .

Similarly, take \(x_{3}=0\) and \(x_{4}=1\). Then \(\mathbf{0}=2 \mathbf{a}_{1}-\mathbf{a}_{2}+\mathbf{a}_{4}\) so \(\mathbf{a}_{4}\) is a linear combination of \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\)

. . .

So \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) span the column space of \(A\); since they are also linearly independent, they form a basis for it.

\(\mathcal{C}(A)=\operatorname{span}\left\{\mathbf{a}_{1}, \mathbf{a}_{2}\right\}=\operatorname{span}\left\{\left[\begin{array}{l}1 \\ 0\end{array}\right],\left[\begin{array}{l}1 \\ 1\end{array}\right]\right\}\)
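In a sympy sketch, the pivot columns reported by rref() (or columnspace() directly) give the same basis:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, -1],
            [0, 1, 2,  1]])

R, pivots = A.rref()
print(pivots)                         # (0, 1): a1 and a2 are the pivot columns
print(A.columnspace())                # basis vectors (1, 0) and (1, 1)
```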

Consistency and Column Space: Back to Linear Systems

The linear system \(A \mathbf{x}=\mathbf{b}\) of \(m\) equations in \(n\) unknowns is consistent if and only if \(\mathbf{b} \in \mathcal{C}(A)\).

  • \(A \mathbf{x}\) is a linear combination of the columns of \(A\), with the entries of \(\mathbf{x}\) as the scalar coefficients
  • So \(A \mathbf{x}=\mathbf{b}\) having a solution means that some linear combination of the columns of \(A\) adds up to \(\mathbf{b}\)
  • that is, \(\mathbf{b} \in \mathcal{C}(A)\)

How to tell if a vector is in a spanning space

Suppose we have a space \(V\) spanned by \(\mathbf{v}_{1}=(1,1,3,3)\), \(\mathbf{v}_{2}=(0,2,2,4)\), and \(\mathbf{v}_{3}=(1,0,2,1)\)

. . .

One of the following two vectors is in \(V\). Which one?

\(\mathbf{u}=(2,1,5,4)\) and \(\mathbf{w}=(1,0,0,0)\)

. . .

Define \(A=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\right]\)

Then \(\mathbf{u} \in \mathcal{C}(A)\) if and only if \(A \mathbf{x}=\mathbf{u}\) has a solution (is consistent)

. . .

Solve both at once: \[ [A|\mathbf{u}| \mathbf{w}]=\left[\begin{array}{lllll} 1 & 0 & 1 & 2 & 1 \\ 1 & 2 & 0 & 1 & 0 \\ 3 & 2 & 2 & 5 & 0 \\ 3 & 4 & 1 & 4 & 0 \end{array}\right] \text { with reduced row echelon form }\left[\begin{array}{rrrrr} 1 & 0 & 1 & 2 & 0 \\ 0 & 1 & -\frac{1}{2} & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \]

. . .

\(\mathbf{u} \in \operatorname{span}\left\{\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\right\}\) (the \(\mathbf{u}\) column has no pivot, so that system is consistent),

but \(\mathbf{w} \notin \operatorname{span}\left\{\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\right\}\) (the row \([0\;0\;0\;0\;1]\) makes the \(\mathbf{w}\) system inconsistent).

. . .

In fact, reading the coefficients off the RREF, for any choice of \(c_{3}\),

\[ \mathbf{u}=\left(2-c_{3}\right) \mathbf{v}_{1}+\frac{1}{2}\left(c_{3}-1\right) \mathbf{v}_{2}+c_{3} \mathbf{v}_{3} \]
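A sketch of the membership test in sympy: a vector \(\mathbf{b}\) lies in \(\mathcal{C}(A)\) exactly when appending it as an extra column does not raise the rank (equivalently, when \(A\mathbf{x}=\mathbf{b}\) is consistent).

```python
from sympy import Matrix

v1 = Matrix([1, 1, 3, 3])
v2 = Matrix([0, 2, 2, 4])
v3 = Matrix([1, 0, 2, 1])
A = v1.row_join(v2).row_join(v3)

def in_span(A, b):
    # b is in C(A) iff appending b as a column does not increase the rank
    return A.row_join(b).rank() == A.rank()

print(in_span(A, Matrix([2, 1, 5, 4])))   # True:  u is in the span
print(in_span(A, Matrix([1, 0, 0, 0])))   # False: w is not
```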

Form of general solution to a linear system

  • Suppose the system \(A \mathbf{x}=\mathbf{b}\) has a particular solution \(\mathbf{x}_{*}\).

  • Then the general solution \(\mathbf{x}\) to this system can be described by the equation

\[ \mathbf{x}=\mathbf{x}_{*}+\mathbf{z} \]

where \(\mathbf{z}\) runs over all elements of \(\mathcal{N}(A)\).

. . .

The solution set is the null space of \(A\) translated by the fixed vector \(\mathbf{x}_{*}\).

. . .

When \(n\) is 2 or 3, this says that the solution set is a single point, a line, or a plane (not necessarily through the origin!).

Example

Describe the solution set of the system \[ \begin{array}{r} x+2 y=3 \\ x+y+z=3 . \end{array} \]

. . .

Row-reduce the augmented matrix:

\[ \left[\begin{array}{rrrr} 1 & 2 & 0 & 3 \\ 1 & 1 & 1 & 3 \end{array}\right] \longrightarrow\left[\begin{array}{rrrr} 1 & 0 & 2 & 3 \\ 0 & 1 & -1 & 0 \end{array}\right] \]

. . .

With \(z=t\) free,

\[ \begin{aligned} & x=3-2 t \\ & y=t \\ & z=t . \end{aligned} \]

This is the particular solution \((3,0,0)\) plus \(t(-2,1,1) \in \mathcal{N}(A)\): a line through \((3,0,0)\), not through the origin.
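A sketch of the same example in sympy: linsolve gives the general solution directly, and the null space of the coefficient matrix supplies the direction of the solution line.

```python
from sympy import Matrix, linsolve, symbols

x, y, z = symbols("x y z")

# General solution to the system x + 2y = 3, x + y + z = 3
print(linsolve([x + 2*y - 3, x + y + z - 3], (x, y, z)))   # {(3 - 2*z, z, z)}

# Direction of the solution line: the null space of the coefficient matrix
A = Matrix([[1, 2, 0],
            [1, 1, 1]])
print(A.nullspace())                  # spanned by (-2, 1, 1)
```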

Null Space algorithm

Given an \(m \times n\) matrix \(A\).

  1. Compute the reduced row echelon form \(R\) of \(A\).

  2. Use \(R\) to find the general solution to the homogeneous system \(A \mathbf{x}=0\).

. . .

  3. Write the general solution \(\mathbf{x}=\left(x_{1}, x_{2}, \ldots, x_{n}\right)\) to the homogeneous system in the form

\[ \mathbf{x}=x_{i_{1}} \mathbf{w}_{1}+x_{i_{2}} \mathbf{w}_{2}+\cdots+x_{i_{n-r}} \mathbf{w}_{n-r} \]

where \(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{n-r}}\) are the free variables and \(r\) is the number of pivot columns (the rank of \(A\)).

. . .

  4. List the vectors \(\mathbf{w}_{1}, \mathbf{w}_{2}, \ldots, \mathbf{w}_{n-r}\). These form a basis of \(\mathcal{N}(A)\) (a code sketch follows).
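A sketch of the algorithm as a small function, using sympy's rref(); the function name is illustrative. It reproduces what A.nullspace() returns.

```python
from sympy import Matrix, zeros

def null_space_basis(A):
    """Basis of N(A): one vector w per free variable, read off from the RREF."""
    R, pivots = A.rref()
    n = A.cols
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        w = zeros(n, 1)
        w[f] = 1                      # set this free variable to 1, the others to 0
        for row, p in enumerate(pivots):
            w[p] = -R[row, f]         # pivot variables come from the RREF rows
        basis.append(w)
    return basis

A = Matrix([[1, 1, 1, -1],
            [0, 1, 2,  1]])
for w in null_space_basis(A):
    print(w.T)                        # (1, -2, 1, 0) and (2, -1, 0, 1)
```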