Homework 6 Solutions

1

Show that if the real matrix \(A\) is orthogonally diagonalizable, then \(A\) is symmetric.

5.4.13

Say \(P^{-1} A P=D, P\) orthogonal so that \(P^{-1}=P^{T}\) and \(D\) diagonal. Then \(A=P D P^{T}\), so \(A^{T}=\left(P D P^{T}\right)^{T}=\left(P^{T}\right)^{T} D^{T} P^{T}=P D P^{T}=A\).

2

Use a technology tool to compute an orthonormal basis for the null space and column space of the following matrices with the SVD and Corollary 5.7. You will have to decide which nearly-zero terms are really zero.

  1. \(\left[\begin{array}{rrr}1 & 1 & 3 \\ 0 & -1 & 0 \\ 1 & -2 & 2 \\ 3 & 0 & 2\end{array}\right]\)

  2. \(\left[\begin{array}{rrr}3 & 1 & 2 \\ 4 & 0 & 1 \\ -1 & 1 & 1\end{array}\right]\)

  3. \(\left[\begin{array}{ccccr}1 & 0 & 1 & 0 & -3 \\ 1 & 2 & 1 & -5 & 2 \\ 0 & 1 & 0 & -3 & 1 \\ 0 & 2 & -3 & 1 & 4\end{array}\right]\)

5.6.3

For each matrix, calculate \(U, \Sigma, V\). (Note that np.linalg.svd returns \(V^{T}\), so the columns of \(V\) appear as rows of the printed output below.)

Then, using these, the column space and null space bases (respectively) are:

  1. First three columns of \(U\); the null space is trivial, \(\{\}\)

  2. First two columns of \(U\); third column of \(V\)

  3. All four columns of \(U\); fifth column of \(V\)

import numpy as np

# Matrix (1). Note: np.linalg.svd returns the singular values as a 1-D
# array D and returns V transposed, so right singular vectors are rows of V.
a = np.array([[1,1,3],[0,-1,0],[1,-2,2],[3,0,2]])
U,D,V = np.linalg.svd(a)
print("U")
display(U)
print("D")
display(D)
print("V")
display(V)
U
array([[-0.57645926,  0.52157815,  0.59550425,  0.20254787],
       [-0.01419674, -0.40963687,  0.03502648,  0.91146543],
       [-0.46215743, -0.74483695,  0.32555706, -0.35445878],
       [-0.67372374,  0.07329248, -0.73359419,  0.05063697]])
D
array([5.05000063, 2.43101652, 1.60861814])
V
array([[-0.60589852,  0.07169352, -0.79230488],
       [-0.00139092,  0.99583401,  0.091174  ],
       [-0.79554073, -0.05634423,  0.60327463]])
# Matrix (3), the 4x5 matrix, computed before matrix (2)
b = np.array([[1,0,1,0,-3],[1,2,1,-5,2],[0,1,0,-3,1],[0,2,-3,1,4]])
U,D,V=np.linalg.svd(b)
print("U")
display(U)
print("D")
display(D)
print("V")
display(V)
U
array([[ 0.27801173, -0.38661197,  0.87047657,  0.12454404],
       [-0.74697535, -0.45876267,  0.10209624, -0.4702563 ],
       [-0.42320738, -0.22566126, -0.08994621,  0.87285863],
       [-0.43085127,  0.76755781,  0.47302916,  0.03828318]])
D
array([7.03992735, 5.85513471, 1.64000865, 0.6835145 ])
V
array([[-0.06661484, -0.39472859,  0.11698845,  0.64967256, -0.63560292],
       [-0.14438176,  0.06693766, -0.53765596,  0.63847463,  0.52720914],
       [ 0.59302907,  0.64652378, -0.27226361,  0.14169838, -0.36894121],
       [-0.5057863 ,  0.01303907, -0.67381425, -0.33504951, -0.42157607],
       [ 0.60598112, -0.64926548, -0.41120147, -0.19477964,  0.06492655]])
# Matrix (2). The third singular value below is ~3e-16, i.e. numerically
# zero, so this matrix has rank 2.
b = np.array([[3,1,2],[4,0,1],[-1,1,1]])
U,D,V=np.linalg.svd(b)
print("U")
display(U)
print("D")
display(D)
print("V")
display(V)
U
array([[-0.66164043,  0.47843349, -0.57735027],
       [-0.74515577, -0.33378068,  0.57735027],
       [ 0.08351534,  0.81221417,  0.57735027]])
D
array([5.45592754e+00, 2.05739026e+00, 2.95442688e-16])
V
array([[-0.92542646, -0.10596275, -0.36381005],
       [-0.34608718,  0.62732272,  0.69763161],
       [-0.15430335, -0.77151675,  0.6172134 ]])
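
The answers above can also be read off programmatically. As a sketch (the tolerance 1e-10 is an illustrative assumption), for matrix (2): compare each singular value against the tolerance to get the numerical rank, then slice the SVD factors.

```python
import numpy as np

b = np.array([[3.0, 1, 2], [4, 0, 1], [-1, 1, 1]])
U, s, Vh = np.linalg.svd(b)          # note: numpy returns V transposed
r = int(np.sum(s > 1e-10))           # numerical rank; tolerance is an assumption
col_basis = U[:, :r]                 # orthonormal basis for the column space
null_basis = Vh[r:, :].T             # orthonormal basis for the null space
print(r)                             # 2: the ~3e-16 singular value is really zero
print(np.allclose(b @ null_basis, 0))  # True: null space vectors are killed by b
```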

3

Use the pseudoinverse to find a least squares solution to \(A \mathbf{x}=\mathbf{b}\), where \(A\) is a matrix from Exercise 5.6.3 with corresponding right-hand side below.

  1. \((2,2,6,5)\)

  2. \((2,3,1)\)

  3. \((4,1,2,3)\)

5.6.4

  1. \((1.10513,-1.67692,0.83333)\)

  2. \((0.46032,0.30159,0.49206)\)

  3. \((0.36721,2.03513,-2.67775,0.68946,-2.10351)\)
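
These can be reproduced with np.linalg.pinv. A sketch for part 1 (the variable names are illustrative):

```python
import numpy as np

# Matrix (1) from the earlier exercise with its right-hand side.
A = np.array([[1.0, 1, 3], [0, -1, 0], [1, -2, 2], [3, 0, 2]])
b = np.array([2.0, 2, 6, 5])
x = np.linalg.pinv(A) @ b   # least squares solution via the pseudoinverse
print(np.round(x, 5))       # matches (1.10513, -1.67692, 0.83333) above
```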

4

Use a technology tool to construct a \(3 \times 10\) table whose \(j\) th column is \(A^{j} \mathbf{x}\), where \(\mathbf{x}=(1,1,1)\) and \(A=\left[\begin{array}{rrr}10 & 17 & 8 \\ -8 & -13 & -6 \\ 4 & 7 & 4\end{array}\right]\). What can you deduce about the eigenvalues of \(A\) based on inspection of this table? Give reasons. Check your claims by finding the eigenvalues of \(A\).

5.3.14

import sympy as sp

A = sp.Matrix([[10, 17, 8], [-8, -13, -6], [4, 7, 4]])
display(A)
x = sp.Matrix([1, 1, 1])
# Each entry of eigenvects() is a triple (eigenvalue, multiplicity, [vectors]).
ev1 = A.eigenvects()[0][2][0]
ev2 = A.eigenvects()[1][2][0]
ev3 = A.eigenvects()[2][2][0]
ev1, ev2, ev3
# Column j of the table is A^j x for j = 1, ..., 10.
mytable = sp.Matrix([(A**j * x).T for j in range(1, 11)]).T
mytable

\(\displaystyle \left[\begin{matrix}10 & 17 & 8\\-8 & -13 & -6\\4 & 7 & 4\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}35 & 11 & -125 & -29 & 515 & 131 & -2045 & -509 & 8195 & 2051\\-27 & -19 & 93 & 61 & -387 & -259 & 1533 & 1021 & -6147 & -4099\\15 & 11 & -45 & -29 & 195 & 131 & -765 & -509 & 3075 & 2051\end{matrix}\right]\)

The dominant eigenvalue must have modulus greater than one, because the entries grow, and it must be non-real, because the entries take two iterations to change sign (multiplication by the eigenvalue rotates by roughly a quarter turn). Since \(A\) is real, its non-real eigenvalues come in conjugate pairs whose imaginary parts cancel, which is why the iterates stay real. Computing the eigenvalues confirms this: they are \(1\) and \(\pm 2i\), and \(|\pm 2i| = 2\) matches the roughly fourfold growth every two steps.
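
The claim can be checked directly with eigenvals(), which returns the eigenvalues with their multiplicities:

```python
import sympy as sp

A = sp.Matrix([[10, 17, 8], [-8, -13, -6], [4, 7, 4]])
print(A.eigenvals())  # eigenvalues 1, 2i, -2i, each with multiplicity 1
```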

5

Note

Please note that there is an apparent error in Definition 6.20 in your book. The formula should be \(H(\zeta)=\sum_{n=-\infty}^{\infty} h_n e^{i \zeta n}\). Note the \(n\) in the exponent!

The sequence in the first paragraph of page 438 makes this clear, where it says “Here \(H(\zeta) = h_0 + h_1 e^{i \zeta} + \dots + h_L e^{i L \zeta}\).”

Compute the DTFT for the FIR filter \(\mathbf{h}=\left\{\frac{1}{2}, \frac{1}{2}\right\}\) and confirm that \(\mathbf{h}\) is a lowpass filter.

6.6.1

\(H(\zeta)=\frac{1}{2}+\frac{1}{2}e^{i\zeta}=e^{i\zeta/2}\cos(\zeta/2)\), so \(|H(0)|=1\) and \(|H(\pi)|=0\), as required for a lowpass filter.
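
As a numerical sketch (the sample grid is arbitrary), the magnitude response falls monotonically across \([0, \pi]\), the lowpass shape:

```python
import numpy as np

zeta = np.linspace(0, np.pi, 5)
H = 0.5 + 0.5 * np.exp(1j * zeta)   # DTFT of h = {1/2, 1/2}
print(np.round(np.abs(H), 4))       # decreases from 1 at zeta = 0 to 0 at zeta = pi
```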

6

Compute the DTFT for the FIR filter \(\mathbf{h}=\left\{-\frac{1}{2}, \frac{1}{2}\right\}\) and confirm that \(\mathbf{h}\) is a highpass filter.

6.6.2

The gain of this filter is

\[ H(\zeta)=-\frac{1}{2}+\frac{1}{2}e^{i\zeta}=i e^{i \zeta/2}\left(\frac{e^{i \zeta/2}-e^{-i \zeta/2}}{2i}\right)=i e^{i \zeta/2}\sin \frac{\zeta}{2} \]

Thus \(|H(0)|=0\) and \(|H(\pi)|=1\), as required for a highpass filter.
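
The same numerical sketch for this filter shows the mirror-image behavior, the highpass shape:

```python
import numpy as np

zeta = np.linspace(0, np.pi, 5)
H = -0.5 + 0.5 * np.exp(1j * zeta)  # DTFT of h = {-1/2, 1/2}
print(np.round(np.abs(H), 4))       # grows from 0 at zeta = 0 to 1 at zeta = pi
```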

7

Apply the filters of the previous two exercises to the sampling problem of Example 6.24 and graph the results as in Figure 2.

6.6.5 6.6.6

8

Compute the Fourier series for \(x(t) \in C_{P W}^{1}[-\pi, \pi]\), where \(x(t)=t^{2} / \pi,-\pi \leq t \leq \pi\) and graph \(x(t)\) and the partial Fourier sums with \(N=3,6\).

6.6.3
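
No worked solution is shown here; as a sketch, the series can be computed symbolically. The classical expansion of \(t^2\) on \([-\pi,\pi]\) gives \(x(t) = \frac{\pi}{3} + \frac{4}{\pi}\sum_{n \ge 1} \frac{(-1)^n}{n^2}\cos(nt)\):

```python
import sympy as sp

t = sp.symbols('t')
# Fourier series of x(t) = t^2/pi on [-pi, pi].
fs = sp.fourier_series(t**2 / sp.pi, (t, -sp.pi, sp.pi))
print(fs.truncate(4))  # constant term plus the first three cosine terms
```

Since all sine coefficients vanish, the partial sums with \(N=3\) and \(N=6\) are fs.truncate(4) and fs.truncate(7), which can be plotted against t**2/sp.pi with sp.plot.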