Homework 3

1

Show that if \(P\) and \(Q\) are stochastic matrices of the same size, then \(P Q\) is also stochastic.

2.4.29 Block \(Q\) into columns \(\mathbf{q}_{i}\), each of which is a probability distribution vector, and obtain

\[ P Q=P\left[\mathbf{q}_{1}, \mathbf{q}_{2}, \ldots, \mathbf{q}_{n}\right]=\left[P \mathbf{q}_{1}, P \mathbf{q}_{2}, \ldots, P \mathbf{q}_{n}\right] \]

Each product \(P \mathbf{q}_i\) has nonnegative entries, and those entries sum to one, since \(\mathbf{e}^{T}\left(P \mathbf{q}_i\right)=\left(\mathbf{e}^{T} P\right) \mathbf{q}_i=\mathbf{e}^{T} \mathbf{q}_i=1\). Thus each column of \(PQ\) is a probability distribution vector, so \(PQ\) is stochastic.
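This is also easy to sanity-check numerically. The snippet below (not part of the proof, just a check with two made-up column-stochastic matrices) verifies that every column of \(PQ\) sums to one:

from sympy import Matrix, Rational, ones

# Two made-up stochastic matrices: every column sums to 1
P = Matrix([[Rational(1, 2), Rational(1, 3), 0],
            [Rational(1, 4), Rational(1, 3), 1],
            [Rational(1, 4), Rational(1, 3), 0]])
Q = Matrix([[0, Rational(3, 4), Rational(1, 5)],
            [1, 0,              Rational(2, 5)],
            [0, Rational(1, 4), Rational(2, 5)]])

# e^T (PQ) should be the all-ones row vector, and all entries are nonnegative
print(ones(1, 3) * (P * Q))   # Matrix([[1, 1, 1]])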

2

The digraph \(H\) that results from reversing all the arrows in a digraph \(G\) is called the reverse digraph of \(G\). Show that if \(A\) is the adjacency matrix for \(G\), then \(A^{T}\) is the adjacency matrix for the reverse digraph \(H\).

2.4.36

In terms of the edge set \(E=\left\{\left(v_{1}, w_{1}\right),\left(v_{2}, w_{2}\right), \ldots,\left(v_{m}, w_{m}\right)\right\}\) of the digraph \(G\), the edge set of its reverse digraph \(H\) is

\[ F=\left\{\left(w_{1}, v_{1}\right),\left(w_{2}, v_{2}\right), \ldots,\left(w_{m}, v_{m}\right)\right\} \]

which means that whenever the edge \(\left(v_{k}, w_{k}\right)\) contributes 1 to the \((i, j)\)th entry of the adjacency matrix \(A\) of \(G\), the edge \(\left(w_{k}, v_{k}\right)\) contributes 1 to the \((j, i)\)th entry of the adjacency matrix \(B\) of \(H\). Hence, \(B=A^{T}\).
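A quick sympy check on a small made-up digraph (using the convention that \(A_{ij}=1\) exactly when there is an edge from node \(i\) to node \(j\); the argument is the same under the opposite convention):

from sympy import zeros

# Edges of a small digraph G on 4 nodes (0-indexed)
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

A = zeros(4, 4)  # adjacency matrix of G
B = zeros(4, 4)  # adjacency matrix of the reverse digraph H
for v, w in edges:
    A[v, w] = 1  # edge (v, w) in G
    B[w, v] = 1  # reversed edge (w, v) in H

print(B == A.T)  # True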

3

Solve the PageRank problem with \(P\) as in Example 2.46, teleportation vector \(\mathbf{v}=\frac{1}{6} \mathbf{e}\) and teleportation parameter \(\alpha=0.8\). (In this example, the correction vector was \(\frac{1}{3}(1,1,1,0,0,0)\); that’s what you’ll use here.)

2.5.21

The solution vector is \(\mathbf{x}=(95,133,189,75,123,123) / 738\) exactly, or \(\mathbf{x} \approx (0.129,0.180,0.256,0.102,0.167,0.167)\).

from sympy import Matrix, Rational, N, nsimplify

# Surfing matrix from Example 2.46, with the correction vector
# (1/3)(1,1,1,0,0,0) already installed in the fourth column
P = Matrix([
    [0, 0, 1/3, 1/3, 0, 0],
    [1/2, 0, 1/3, 1/3, 0, 0],
    [1/2, 1, 0, 1/3, 0, 0],
    [0, 0, 1/3, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0]
])
P = nsimplify(P, rational=True)  # convert the float entries to exact rationals

v = Matrix([Rational(1, 6)] * 6)  # teleportation vector v = (1/6)e
alpha = Rational(8, 10)           # teleportation parameter

We start with our surfing matrix \[ P=\left[\begin{array}{llllll} 0 & 0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{2} & 1 & 0 & \frac{1}{3} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right] \]

We want to solve this:

\[ \left(\alpha P+(1-\alpha) \mathbf{v e}^{T}\right) \mathbf{x}=\mathbf{x} \]

One approach: find an eigenvector of \(\alpha P+(1-\alpha) \mathbf{v e}^{T}\) for the eigenvalue \(1\).

# The PageRank matrix alpha*P + (1 - alpha)*v*e^T
M = alpha * P + (1 - alpha) * v * Matrix([1] * P.shape[0]).T
# Find the first eigenvector whose eigenvalue equals 1
for eigenvalue, multiplicity, eigenvectors in M.eigenvects():
    if eigenvalue == 1:
        ev = eigenvectors[0]
        break
# Normalize the eigenvector so its entries sum to 1
ev = ev / sum(ev)
ev.T

\(\displaystyle \left[\begin{matrix}\frac{95}{738} & \frac{133}{738} & \frac{21}{82} & \frac{25}{246} & \frac{1}{6} & \frac{1}{6}\end{matrix}\right]\)

Second approach: subtract \(\mathbf{x}\) from both sides of the equation and solve the resulting homogeneous linear system.

Now we want to solve this:

\[ \left(\alpha P+(1-\alpha) \mathbf{v e}^{T}\right) \mathbf{x}-\mathbf{x}=\mathbf{0} \]

If we try to solve this directly, sympy complains that the matrix is not invertible. Since the system is homogeneous, a singular coefficient matrix means there are infinitely many solutions; we can use gauss_jordan_solve to find the general form…

(Also note: if we define \(\alpha\) as a float instead of an exact Rational, sympy makes everything numeric, the matrix tests as nonsingular in floating point, and we find only the trivial solution.)

from sympy import Matrix, zeros, eye

# Coefficient matrix of the homogeneous system (alpha*P + (1-alpha)*v*e^T - I)x = 0
A = (alpha * P + (1 - alpha) * v * Matrix([1] * P.shape[0]).T) - eye(6)
soln = A.gauss_jordan_solve(zeros(6, 1))[0]
display(soln.T)

\(\displaystyle \left[\begin{matrix}\frac{95 \tau_{0}}{123} & \frac{133 \tau_{0}}{123} & \frac{63 \tau_{0}}{41} & \frac{25 \tau_{0}}{41} & \tau_{0} & \tau_{0}\end{matrix}\right]\)

Next, we would need to pick a value for \(\tau_0\) to find a specific solution. We want the sum of the elements of \(\mathbf{x}\) to be 1; conveniently, if we simply divide by the sum of the elements of the general solution, the \(\tau_0\) will cancel out.

from sympy import simplify
from IPython.display import Latex
import sympy as sp
sp.init_printing()

# Divide by the sum of the entries; tau_0 cancels, leaving the PageRank vector
display(Latex("$$\\frac{1}{738}" + sp.latex(simplify(soln / sum(soln) * 738)[:, 0])
              + "\\approx" + sp.latex(N(soln / sum(soln), 4)) + "$$"))

\[\frac{1}{738}\left[\begin{matrix}95\\133\\189\\75\\123\\123\end{matrix}\right]\approx\left[\begin{matrix}0.1287\\0.1802\\0.2561\\0.1016\\0.1667\\0.1667\end{matrix}\right]\]

4

Modify the surfing matrix \(P\) of Example 2.46 by using the correction vector \(\frac{1}{5}(1,1,1,0,1,1)\) and solve the resulting PageRank problem with teleportation vector \(\mathbf{v}=\frac{1}{6} \mathbf{e}\) and teleportation parameter \(\alpha=0.8\).

2.5.22

The solution vector is \(\mathbf{x}=(2755,3857,5481,2375,6467,6467) / 27402\) exactly, or \(\mathbf{x} \approx (0.101,0.141,0.200,0.087,0.236,0.236)\).
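A quick way to reproduce this, reusing the \(P\), \(\mathbf{v}\), and \(\alpha\) from Problem 3 (a sketch, assuming those are still in scope): replace the fourth column of \(P\) with the new correction vector and take the null space of \(M - I\).

from sympy import Matrix, Rational, eye, N

# Swap in the correction vector (1/5)(1,1,1,0,1,1) as the fourth column
P4 = P.copy()
P4[:, 3] = Matrix([Rational(1, 5), Rational(1, 5), Rational(1, 5), 0,
                   Rational(1, 5), Rational(1, 5)])

# Stationary vector of M = alpha*P4 + (1-alpha)*v*e^T: null space of M - I
M4 = alpha * P4 + (1 - alpha) * v * Matrix([1] * 6).T
x4 = (M4 - eye(6)).nullspace()[0]
x4 = x4 / sum(x4)  # normalize so the entries sum to 1
display(x4.T, N(x4.T, 3))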

5

Show that there is more than one stationary state for the Markov chain of Example 2.46.

2.5.29

Solution. The reduced row echelon form for \(I-P\) is

\[ \left[\begin{array}{cccccc} 1 & 0 & 0 & -\frac{4}{3} & 0 & 0 \\ 0 & 1 & 0 & -2 & 0 & 0 \\ 0 & 0 & 1 & -3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right], \]

which has two free variables (\(x_4\) and \(x_6\)), so the stationary states form a two-parameter family.

Two solutions whose coordinates sum to one are \(\mathbf{x}=\left(0,0,0,0, \frac{1}{2}, \frac{1}{2}\right)\) and \(\mathbf{x}=\frac{1}{22}(4,6,9,3,0,0) \approx (0.182, 0.273, 0.409, 0.136, 0, 0)\).
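A quick check with sympy (assuming the exact \(P\) from Problem 3 is still in scope); the stationary states are precisely the probability vectors in the null space of \(I-P\):

from sympy import eye

# Basis for the null space of I - P
ns = (eye(6) - P).nullspace()
print(len(ns))               # 2: the stationary states form a two-parameter family
for b in ns:
    display((b / sum(b)).T)  # each basis vector, rescaled so its entries sum to 1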

6

Repair the dangling node problem of the graph of Figure 2.7 by creating a correction vector that makes transition to all nodes equally likely. (Note that this means all nodes, including transitioning back to the dangling node.)

Next, find all stationary states for the resulting Markov chain.

2.5.30

The matrix \(P\) will be as in Example 2.46 except that the fourth column will have the value \(1 / 6\) in each entry. The reduced row echelon form for \(I-P\) is

\[ \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] . \]

The only solution whose coordinates sum to one is \(\mathbf{x}=\left(0,0,0,0, \frac{1}{2}, \frac{1}{2}\right)\).

If we want to do this numerically:

\[ Q=\left[\begin{array}{cccccc} 0 & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0 \\ \frac{1}{2} & 1 & 0 & \frac{1}{6} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{6} & 0 & 1 \\ 0 & 0 & 0 & \frac{1}{6} & 1 & 0 \end{array}\right], \quad \mathbf{x}=\left(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, x_{6}\right) \]

import sympy as sp
from sympy import Matrix, nsimplify, N

# Repaired surfing matrix: the fourth column is all 1/6
Q = nsimplify(Matrix([
    [0, 0, 1/3, 1/6, 0, 0],
    [1/2, 0, 1/3, 1/6, 0, 0],
    [1/2, 1, 0, 1/6, 0, 0],
    [0, 0, 1/3, 1/6, 0, 0],
    [0, 0, 0, 1/6, 0, 1],
    [0, 0, 0, 1/6, 1, 0]
]), rational=True)

display(Q)
Qbig = Q**10000   # a high (even) power of Q, computed in exact arithmetic

# Make a random starting distribution using sympy's random number generator
v = sp.randMatrix(6, 1)
v = N(v / sum(v))
display(Qbig * v)

\(\displaystyle \left[\begin{matrix}0 & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0\\\frac{1}{2} & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0\\\frac{1}{2} & 1 & 0 & \frac{1}{6} & 0 & 0\\0 & 0 & \frac{1}{3} & \frac{1}{6} & 0 & 0\\0 & 0 & 0 & \frac{1}{6} & 0 & 1\\0 & 0 & 0 & \frac{1}{6} & 1 & 0\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}5.0738639483025 \cdot 10^{-256}\\7.76382144414418 \cdot 10^{-256}\\1.18187383695331 \cdot 10^{-255}\\5.0738639483025 \cdot 10^{-256}\\0.596428571428571\\0.403571428571429\end{matrix}\right]\)

# Another random starting distribution, and an even higher power: Q^20000
v = sp.randMatrix(6, 1)
v = N(v / sum(v))
Qbig * Qbig * v

# Repeat with many random starting vectors and histogram the 5th entries
import matplotlib.pyplot as plt
import numpy as np

results = []
for i in range(100):
    v = sp.randMatrix(6, 1)
    v = N(v / sum(v))
    results.append((Qbig * v)[4, 0])
pr = np.array(results, dtype=float)
plt.hist(pr, bins=20)
(The histogram shows the 100 fifth entries scattered fairly evenly between about 0.33 and 0.63, with no strong peak.)

We see that the result depends on the starting vector: the powers \(Q^{k} \mathbf{x}^{(0)}\) do not actually converge to the unique stationary state. The reason is that the chain is periodic: states 5 and 6 map to each other with probability 1, so each step sends \(x_{5}-x_{6}\) to \(-\left(x_{5}-x_{6}\right)\); this difference never dies out, it only flips sign.

7

Solve the nonlinear system of equations of Example 2.48 by using nine iterations of the vector Newton formula (2.5), starting with the initial guess \(\mathbf{x}^{(0)}=(0,1)\). Evaluate \(F\left(\mathbf{x}^{(9)}\right)\).

2.5.25 With \(\mathbf{x}=(x, y)\), we have

\[ \mathbf{F}(\mathbf{x})=\left[\begin{array}{l}x^{2}+\sin (\pi x y)-1 \\ x+y^{2}+e^{x+y}-3\end{array}\right], \quad J_{\mathbf{F}}(\mathbf{x})=\left[\begin{array}{cc}2 x+\pi y \cos (\pi x y) & \pi x \cos (\pi x y) \\ 1+e^{x+y} & 2 y+e^{x+y}\end{array}\right], \]

and nine Newton iterations give

\[ \mathbf{x}^{(9)} \approx\left[\begin{array}{r}1.00001 \\ -0.99999\end{array}\right], \quad \mathbf{F}\left(\mathbf{x}^{(9)}\right) \approx 10^{-6}\left[\begin{array}{r}-1.3422 \\ 2.0226\end{array}\right]. \]

Set this up in Python and solve it:

from IPython.display import display, Markdown
from sympy import latex

def printmult(lst):
    # Display a mix of strings and sympy objects as one Markdown line
    output = ""
    for l in lst:
        if isinstance(l, str):
            output += f"{l}"
        else:
            output += f"${{{latex(l)}}}$ "
    display(Markdown(output))

from sympy import symbols, sin, pi, exp, cos, Matrix, N, init_printing
init_printing()

x, y = symbols('x y')
F = Matrix([x**2 + sin(pi*x*y) - 1, x + y**2 + exp(x + y) - 3])
J = F.jacobian([x, y])
display(J)

# Nine Newton iterations: x <- x - J(x)^(-1) F(x), starting from (0, 1)
x0 = Matrix([0, 1])
for i in range(9):
    x0 = N(x0 - J.subs({x: x0[0], y: x0[1]}).inv() * F.subs({x: x0[0], y: x0[1]}))
printmult(["x=", N(x0, 5)])
printmult(["F(x)=", N(F.subs({x: x0[0], y: x0[1]}), 5)])

\(\displaystyle \left[\begin{matrix}2 x + \pi y \cos{\left(\pi x y \right)} & \pi x \cos{\left(\pi x y \right)}\\e^{x + y} + 1 & 2 y + e^{x + y}\end{matrix}\right]\)

x=\({\left[\begin{matrix}1.0\\-0.99999\end{matrix}\right]}\)

F(x)=\({\left[\begin{matrix}-1.3422 \cdot 10^{-6}\\2.0226 \cdot 10^{-6}\end{matrix}\right]}\)

printmult(["F(x)=",N(F.subs({x: x0[0], y: x0[1]}),5)])
#print(F,N(F.subs({x: x0[0], y: x0[1]}),5))

F(x)=\({\left[\begin{matrix}-1.3422 \cdot 10^{-6}\\2.0226 \cdot 10^{-6}\end{matrix}\right]}\)

8

Find the minimum value of the function \(F(x, y)=\left(x^{2}+y+1\right)^{2}+x^{4}+y^{4}\) by using the Newton method to find critical points of the function \(F(x, y)\), i.e., points where \(f(x, y)=F_{x}(x, y)=0\) and \(g(x, y)=F_{y}(x, y)=0\).

2.5.26

from sympy import symbols, Matrix, N, init_printing
init_printing()

x, y = symbols('x y')
fxy = Matrix([(x**2 + y + 1)**2 + x**4 + y**4])
# Gradient of F: the system to solve is F_x = F_y = 0
fx = fxy.jacobian([x])
fy = fxy.jacobian([y])
F = Matrix([fx, fy])
J = F.jacobian([x, y])   # Hessian of F
printmult(["F(x,y)=", fxy])
printmult(["F=", F])
printmult(["J=", J])
print("After 9 iterations, the solution is:")
x0 = Matrix([.1, .1])
for i in range(9):
    x0 = N(x0 - N(J.subs({x: x0[0], y: x0[1]})).inv() * F.subs({x: x0[0], y: x0[1]}))
printmult(["x=", N(x0, 5)])
printmult(["f(x,y)=", N(fxy.subs({x: x0[0], y: x0[1]}), 5)])

F(x,y)=\({\left[\begin{matrix}x^{4} + y^{4} + \left(x^{2} + y + 1\right)^{2}\end{matrix}\right]}\)

F=\({\left[\begin{matrix}4 x^{3} + 4 x \left(x^{2} + y + 1\right)\\2 x^{2} + 4 y^{3} + 2 y + 2\end{matrix}\right]}\)

J=\({\left[\begin{matrix}24 x^{2} + 4 y + 4 & 4 x\\4 x & 12 y^{2} + 2\end{matrix}\right]}\)

After 9 iterations, the solution is:

x=\({\left[\begin{matrix}0\\-0.58975\end{matrix}\right]}\)

f(x,y)=\({\left[\begin{matrix}0.28927\end{matrix}\right]}\)

So the minimum value is approximately \(0.28927\), attained near \((x, y) \approx (0, -0.58975)\). (The Hessian there, \(J \approx \operatorname{diag}(1.64, 6.17)\), is positive definite, confirming a local minimum.)

9

Apply the following digital filter to the noisy data of Example 2.71 and graph the results. Does it appear to be a low pass filter?

\[ y_{k}=\frac{1}{2} x_{k}+\frac{1}{2} x_{k-1}, \quad k=1,2, \ldots, 33 \]

2.8.9

Yes, but rather mediocrely. A graph of the filtered data is in Figure 1, and a comparison of the exact and filtered signals is in Figure 2.
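Since the data of Example 2.71 aren't reproduced here, the sketch below uses a stand-in signal (a sampled sine wave plus uniform noise, 34 samples \(x_{0}, \ldots, x_{33}\)); averaging adjacent samples damps the rapidly varying noise while roughly preserving the slowly varying signal, which is what makes this a (mediocre) low-pass filter:

import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the noisy data of Example 2.71: sine wave plus uniform noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 34)
x_noisy = np.sin(2 * np.pi * t) + 0.2 * rng.uniform(-1, 1, 34)

# Averaging filter: y_k = (1/2) x_k + (1/2) x_{k-1}, k = 1, ..., 33
y = 0.5 * x_noisy[1:] + 0.5 * x_noisy[:-1]

plt.plot(t, x_noisy, 'o-', label='noisy data')
plt.plot(t[1:], y, 's-', label='filtered')
plt.legend()
plt.show()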

10

Apply the following digital filter to the noisy data of Example 2.71 and graph the results. Does it appear to be a high pass filter?

\[ y_{k}=\frac{1}{2} x_{k}-\frac{1}{2} x_{k-1}, \quad k=1,2, \ldots, 33 \]

2.8.10

Yes, definitely high pass. A graph of the filtered data is in Figure 3.
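On the same stand-in data as in Problem 9, only the filter line changes: differencing adjacent samples cancels the slowly varying component and passes the rapid fluctuations, hence high pass.

# Differencing filter: y_k = (1/2) x_k - (1/2) x_{k-1}, k = 1, ..., 33
y_hp = 0.5 * x_noisy[1:] - 0.5 * x_noisy[:-1]

plt.plot(t, x_noisy, 'o-', label='noisy data')
plt.plot(t[1:], y_hp, 's-', label='high-pass filtered')
plt.legend()
plt.show()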

11

Show that \(L=\left[\begin{array}{lll}1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & 1 & 1\end{array}\right]\) and \(U=\left[\begin{array}{rrr}2 & -1 & 1 \\ 0 & 4 & -3 \\ 0 & 0 & -1\end{array}\right]\) is an LU factorization of \(A=\left[\begin{array}{rrr}2 & -1 & 1 \\ 2 & 3 & -2 \\ 4 & 2 & -2\end{array}\right]\).

2.8.1

\(L U=\left[\begin{array}{lll}1 & 0 & 0 \\ 1 & 1 & 0 \\ 2 & 1 & 1\end{array}\right]\left[\begin{array}{rrr}2 & -1 & 1 \\ 0 & 4 & -3 \\ 0 & 0 & -1\end{array}\right]=\left[\begin{array}{rrr}2 & -1 & 1 \\ 2 & 3 & -2 \\ 4 & 2 & -2\end{array}\right]=A\), with \(L\) unit lower triangular and \(U\) upper triangular, so this is an LU factorization of \(A\).

12

By hand:

Find an LU factorization of the matrix \(A=\left[\begin{array}{rrr}2 & 1 & 0 \\ -4 & -1 & -1 \\ 2 & 3 & -3\end{array}\right]\).

2.8.5

\(L=\left[\begin{array}{rrr}1 & 0 & 0 \\ -2 & 1 & 0 \\ 1 & 2 & 1\end{array}\right]\) and \(U=\left[\begin{array}{rrr}2 & 1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & -1\end{array}\right]\).

13

By hand:

Find a PLU factorization of the matrix \(A=\left[\begin{array}{rrr}2 & 1 & 3 \\ -4 & -2 & -1 \\ 2 & 3 & -3\end{array}\right]\).

2.8.6

\(P=\left[\begin{array}{lll}1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0\end{array}\right], L=\left[\begin{array}{rrr}1 & 0 & 0 \\ 1 & 1 & 0 \\ -2 & 0 & 1\end{array}\right]\) and \(U=\left[\begin{array}{rrr}2 & 1 & 3 \\ 0 & 2 & -6 \\ 0 & 0 & 5\end{array}\right]\).
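All three factorizations (Problems 11-13) are easy to verify with sympy; this is just a check on the by-hand answers:

from sympy import Matrix

# Problem 11: L*U should equal A
L = Matrix([[1, 0, 0], [1, 1, 0], [2, 1, 1]])
U = Matrix([[2, -1, 1], [0, 4, -3], [0, 0, -1]])
A = Matrix([[2, -1, 1], [2, 3, -2], [4, 2, -2]])
print(L * U == A)  # True

# Problem 12: L*U should equal A
L = Matrix([[1, 0, 0], [-2, 1, 0], [1, 2, 1]])
U = Matrix([[2, 1, 0], [0, 1, -1], [0, 0, -1]])
A = Matrix([[2, 1, 0], [-4, -1, -1], [2, 3, -3]])
print(L * U == A)  # True

# Problem 13: P*L*U should equal A
P = Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
L = Matrix([[1, 0, 0], [1, 1, 0], [-2, 0, 1]])
U = Matrix([[2, 1, 3], [0, 2, -6], [0, 0, 5]])
A = Matrix([[2, 1, 3], [-4, -2, -1], [2, 3, -3]])
print(P * L * U == A)  # True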