Math 641-600 — Fall 2018
Assignments
Assignment 1  Due Wednesday, September 5, 2018
 Read sections 1.1–1.4
 Do the following problems.
 Section 1.1: 4, 5, 7(a), 8, 9(a) (Do the first 3, but without
software.)
 Section 1.2: 9
 Let $U$ be a subspace of an inner product space $V$, with the
inner product and norm being $\langle\cdot,\cdot \rangle$ and
$\|\cdot\|$. Also, let $v$ be in $V$. (Do not assume that
$U$ is finite dimensional or use arguments requiring a basis.)
 Fix $v\in V$. Show that there is a unique vector $p \in U$ that
satisfies $\min_{u\in U}\|v-u\| = \|v-p\|$ if and only if $v-p\in
U^\perp$.
 Suppose $p$ exists for every $v\in V$. Since $p$ is uniquely
determined by $v$, we may define a map $P: V \to U$ via
$Pv:=p$. Show that $P$ is a
linear map and that $P$ satisfies $P^2 = P$. ($P$ is called
an orthogonal projection. The vector $p$ is the orthogonal
projection of $v$ onto $U$.)
 If the projection $P$ exists, show that for all $w,z\in V$,
$\langle Pw,z\rangle = \langle Pw,Pz\rangle= \langle
w,Pz\rangle$. Use this to show that $U^\perp= \{w\in
V\colon Pw=0\}$.
 Suppose that the projection $P$ exists. Show that $V=U\oplus
U^\perp$, where $\oplus$ indicates the direct sum of the two
spaces. (This is easy, but important.)
 Let $U$ and $V$ be as in the previous exercise. Suppose that $U$
is finite dimensional and that $B=\{u_1,u_2,\ldots,u_n\}$ is an
ordered basis for $U$. In addition, let $G$ be the $n\times n$
matrix with entries $G_{jk}= \langle u_k,u_j\rangle$.
 Show that $G$ is positive definite and thus invertible.
 Let $v\in V$ and $d_k := \langle v,u_k\rangle$. Show that $p$
exists for every $v$ and is given by $p=\sum_j x_j u_j\in U$, where
the $x_j$'s satisfy the normal equations, $d_k = \sum_{j=1}^n
G_{kj}x_j$.
 Show that if $B$ is orthonormal, then $Pv=\sum_j \langle
v,u_j\rangle u_j$.
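The normal equations above can be checked numerically. The sketch below (with hypothetical basis vectors and $v$, chosen only for illustration) projects a vector onto a two-dimensional subspace of $\mathbb R^4$ by solving $Gx=d$:

```python
import numpy as np

# Hypothetical data: project v onto U = span{u1, u2} in R^4 by solving
# the normal equations G x = d, where G[k, j] = <u_j, u_k> and d[k] = <v, u_k>.
u1 = np.array([1.0, 0.0, 1.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0, 1.0])
B = np.column_stack([u1, u2])
v = np.array([2.0, 1.0, 0.0, 3.0])

G = B.T @ B                  # Gram matrix (real inner product)
d = B.T @ v                  # right-hand side of the normal equations
x = np.linalg.solve(G, d)
p = B @ x                    # orthogonal projection of v onto U

# The residual v - p is orthogonal to U, as in the exercise above.
print(B.T @ (v - p))
```

The printed residual inner products are zero (to rounding), illustrating $v-p\in U^\perp$.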
Assignment 2  Due Wednesday, September 12, 2018.
 Read the notes
on
Banach spaces and Hilbert Spaces, and sections 2.1 and 2.2 in
Keener.
 Do the following problems.
 Section 1.2: 10(a,b) Hint for 10(a): You may choose the norms
$\|\phi_j\|$ and $\|\psi_k\|$ to be any (convenient) positive numbers.
 Section 1.3: 2, 3
 Find the set of biorthogonal vectors corresponding to the set
$\left\{\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\
1 \\ 1 \end{pmatrix}, \begin{pmatrix}2 \\ 1\\
0\end{pmatrix}\right\}$. Suppose that $\{\mathbf a_1, \mathbf a_2,
\ldots, \mathbf a_n\}$ is a set of linearly independent vectors in
$\mathbb R^n$. What is the corresponding set of biorthogonal vectors?
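For the general question above, a numerical sketch may help: if the $\mathbf a_j$ are the columns of an invertible matrix $A$, a biorthogonal set $\{\mathbf b_i\}$ must satisfy $\langle \mathbf a_j, \mathbf b_i\rangle = \delta_{ij}$, i.e. $B^TA=I$. The code below checks this for the three vectors given (the matrix entries come from the problem; the construction itself is the standard one):

```python
import numpy as np

# Columns of A are the three given vectors. Biorthogonality B^T A = I
# forces the biorthogonal vectors to be the columns of (A^{-1})^T.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
B = np.linalg.inv(A).T      # columns are the biorthogonal vectors

print(np.round(B.T @ A, 12))  # identity matrix, confirming biorthogonality
```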
 This problem concerns several important inequalities.
 Show that if $\alpha,\beta$ are positive and $\alpha+\beta
=1$, then for all $u,v \ge 0$ we have
$u^{\alpha}v^{\beta} \le \alpha u + \beta v$.
 Let $x,y \in \mathbb R^n$, let $p > 1$, and define
$q$ by $q^{-1} = 1 - p^{-1}$. Prove Hölder's
inequality,
\[ \sum_{j} |x_j y_j| \le \|x\|_p \|y\|_q. \]
Hint: use the inequality in part (a), but with appropriate choices of
the parameters. For example, $u =
(|x_j|/\|x\|_p)^p$.
 Let $x,y \in \mathbb R^n$, and let $p > 1$. Prove
Minkowski's inequality,
\[ \|x+y\|_p \le \|x\|_p + \|y\|_p. \]
Use this to show that $\|x\|_p$ defines a norm on
$\mathbb R^n$. Hint: you will need to use Hölder's
inequality, along with a trick.
 Find the $QR$ factorization for the matrix $A=\begin{pmatrix} 1 &
2 & 0\\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$. Use it to solve
$Ax=b$, where $b=\begin{pmatrix} 1\\ 3\\ 7 \end{pmatrix}$.
 Let $\mathbf y\in \mathbb R^n$. Use the normal equations for a
minimization problem to show that the minimizer of $\|\mathbf y -
A\mathbf x\|$ is given by $\mathbf x_{min} = R^{-1}Q^\ast \mathbf
y$. ($Q^\ast=Q^T$, since we are dealing with real scalars.)
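A quick numerical check of the QR computation, using numpy's built-in routine (any QR routine will do; the signs of $Q$'s columns may differ from a hand computation via Gram-Schmidt):

```python
import numpy as np

# Matrix and right-hand side from the problem above.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 3.0, 7.0])

Q, R = np.linalg.qr(A)            # A = QR, Q orthogonal, R upper triangular
x = np.linalg.solve(R, Q.T @ b)   # x = R^{-1} Q^* b

print(x)
```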
 Let $U$ be a unitary $n\times n$ matrix. Show that the following hold.
 $\langle Ux, Uy \rangle = \langle x, y \rangle$
 The eigenvalues of $U$ all lie on the unit circle, $|\lambda|=1$.
 Eigenvectors corresponding to distinct eigenvalues are orthogonal.
Assignment 3  Due Wednesday, September 19, 2018.
 Read Keener's section 2.1 and the notes
on Lebesgue
integration.
 Do the following problems.
 Section 2.1: 5
 Before one can define a norm or inner product on some set, one
has to show that the set is a vector space; i.e., that
linear combinations of vectors are in the space. Do this for the
spaces of sequences below. The inequalities from the previous
assignment will be useful.
 $\ell^2=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty
|x_j|^2<\infty\}$
 $\ell^p=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty
|x_j|^p<\infty\}$, all $1\le p<\infty$, $p\ne 2$.
 $\ell^\infty = \{x=\{x_n\}_{n=1}^\infty\colon \sup_{1\le
j}|x_j|<\infty \}$.
 Show that, for all $1\le p <\infty$, $\|x\|_p :=
\big(\sum_{j=1}^\infty |x_j|^p \big)^{1/p}$ defines a norm on
$\ell^p$.
 Show that $\ell^2$ is an inner product space, with $\langle
x,y\rangle = \sum_{j=1}^\infty x_j \bar y_j$ being the inner product, and
that with this inner product it is a Hilbert space. Bonus: show that
it is separable.
 Let $C^1[0,1]$ be the set of all continuously differentiable
real-valued functions on $[0,1]$. Show that $C^1[0,1]$ is a Banach
space with the norm $\|f\|_{C^1} := \max_{x\in [0,1]}|f(x)| + \max_{x\in
[0,1]}|f'(x)|$.
 Let $f\in C^1[0,1]$. Show that
$\|f\|_{C[0,1]}\le C\|f\|_{H^1[0,1]}$, where $C$ is a constant
independent of $f$ and $\|f\|_{H^1[0,1]}^2 := \int_0^1\big( f(x)^2 +
f'(x)^2\big)dx$.
 A measurable function whose range consists of a finite number of
values is a simple function —
see Lebesgue
integration, p. 5. Use the definition of the Lebesgue integral
in terms of Lebesgue sums, from eqn. 2, to show that, in terms of this
definition, the integral of a simple function ends up being the one in
eqn. 3 on p. 6.
Assignment 4  Due Wednesday, September 26, 2018.
 Read the notes on
Lebesgue
integration and
on Orthonormal
sets and expansions.
 Do the following problems.
 Section 2.1: 10
 Section 2.2: 1 (Use $w=1$.), 8(a,b,c) (FYI: the formula for
$T_n(x)$ has an $n!$ missing in the numerator.), 9
 This problem is aimed at showing that the Chebyshev polynomials
form a complete set in $L^2_w$, which has the weighted inner product
\[ \langle f,g\rangle_w := \int_{-1}^1
\frac{f(x)\overline{g(x)}dx}{\sqrt{1 - x^2}}. \]
 Show that the continuous functions are dense in $L^2_w$. Hint: if
$f\in L^2_w$, then $ \frac{f(x)}{(1 - x^2)^{1/4}}$ is in $L^2[-1,1]$.
 Show that if $f\in L^\infty[-1,1]$, then $\|f\|_w \le
\sqrt{\pi}\|f\|_\infty$.
 Follow the proof given in
the notes on Orthonormal
Sets and Expansions showing that the Legendre polynomials form a
complete set in $L^2[-1,1]$ to show that the Chebyshev polynomials
form a complete orthogonal set in $L^2_w$.
 Let $F(s) = \int_0^\infty e^{-st} f(t)\,dt$ be the Laplace transform of $f \in
L^1([0,\infty))$. Use the Lebesgue dominated convergence
theorem to show that $F$ is continuous from the right at $s = 0$. That is,
show that
\[ \lim_{s\downarrow 0} F(s) = F(0) = \int_0^\infty f(t)\,dt. \]
 Let $f_n(x) = n^{3/2} x e^{-nx}$, where
$x \in [0,1]$ and $n = 1, 2, 3, \ldots$
 Verify that the pointwise limit of $f_n(x)$ is $f(x) = 0$.
 Show that $\|f_n\|_{C[0,1]} \to \infty$ as $n
\to \infty$, so that $f_n$ does not converge uniformly to
$0$.
 Find a constant $C$ such that for all $n$ and all $x \in (0,1]$,
$f_n(x) \le C x^{-1/2}$.
 Use the Lebesgue dominated convergence theorem to show that
\[ \lim_{n\to\infty} \int_0^1 f_n(x)\,dx = 0. \]
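A numerical sanity check of the phenomenon above (a sketch, not part of the proof): for $f_n(x)=n^{3/2}xe^{-nx}$, the sup norm on $[0,1]$ grows like $\sqrt n/e$, while the integral over $[0,1]$ tends to $0$:

```python
import numpy as np

for n in [10, 100, 1000]:
    x = np.linspace(0.0, 1.0, 200001)
    fn = n**1.5 * x * np.exp(-n * x)
    sup = fn.max()       # maximum is at x = 1/n, with value sqrt(n)/e
    # exact antiderivative: int_0^1 x e^{-nx} dx = 1/n^2 - e^{-n}(1/n + 1/n^2)
    integral = n**1.5 * (1.0 / n**2 - np.exp(-n) * (1.0 / n + 1.0 / n**2))
    print(f"n={n}: sup norm ~ {sup:.4f}, integral = {integral:.4f}")
```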
 Let $U:=\{u_j\}_{j=1}^\infty$ be an orthonormal set in a Hilbert
space $\mathcal H$. Show that the following two statements are
equivalent. (You may use what we have proved for o.n. sets in
general; for example, Bessel's inequality, minimization properties,
etc.)
 $U$ is maximal in the sense that there is no nonzero vector in
$\mathcal H$ that is orthogonal to $U$. (Equivalently, $U$ is not a
proper subset of any other o.n. set in $\mathcal H$.)
 Every vector in $\mathcal H$ may be uniquely represented as the
series $f=\sum_{j=1}^\infty \langle f, u_j\rangle u_j$.
Assignment 5  Due Wednesday, October 3, 2018.
 Read sections 2.2.2–2.2.4 and the notes on
Approximation
of Continuous Functions.
 Do the following problems.
 Section 2.2: 10
 In proving the Weierstrass Approximation Theorem, we did the case
$x>j/n$. Do the case $x < j/n$.
 Let $\delta>0$. We define the modulus of continuity for $f\in
C[0,1]$ by $\omega(f,\delta) := \sup_{\,|s-t|\,\le\,
\delta,\,s,t\in [0,1]}|f(s)-f(t)|$.
 Fix $\delta>0$. Let $S_\delta = \{ \epsilon >0 \colon |f(t) - f(s)|
< \epsilon \, \forall\ s,t \in [0,1], \ |s - t| \le \delta\}$. In other
words, for given $\delta$, $S_\delta$ is the set of all
$\epsilon$ such that $|f(t) - f(s)| < \epsilon$ holds for all $|s -
t|\le \delta$. Show that $\omega(f, \delta) = \inf S_\delta$.
 Show that $\omega(f,\delta)$ is nondecreasing as a
function of $\delta$. (Or, more to the point, as $\delta \downarrow 0$,
$\omega(f,\delta)$ gets smaller.)
 Show that $\lim_{\delta \downarrow 0} \omega(f,\delta) = 0$.

 Let $g$ be $C^2$ on an interval
$[a,b]$. Let $h = b - a$. Show that if $g(a) = g(b) = 0$, then $
\|g\|_{C[a,b]} \le (h^2/8)
\|g''\|_{C[a,b]}$. Give an example that shows
that $1/8$ is the best possible constant.
 Use the previous part to show that if $f \in
C^2[0,1]$, then the equally spaced linear spline interpolant
$s_f$ satisfies $\|f - s_f\|_{C[0,1]} \le (8n^2)^{-1}\|f''\|_{C[0,1]}$.
 Let $f(x)$ be continuous on $[0,1]$ and let $s_f(x)$ be the
linear spline for $f$ with equally spaced points $j/n$, where $j=0,
1,2,\ldots,n$.
 Show that $\int_0^1s_f(x)dx$ is equal to the trapezoidal
(quadrature) rule for approximating $\int_0^1f(x)dx$.
 Let $E=\big|\int_0^1f(x)dx - \int_0^1 s_f(x)dx\big|$ be the
quadrature error. If $f\in C^2[0,1]$, use the previous problem to show
that $E\le (8n^2)^{-1}\|f''\|_{C[0,1]}$.
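The identity in part (a) can be illustrated numerically. In this sketch the test function $f(x)=e^x$ is chosen arbitrarily; `np.interp` evaluates the piecewise linear interpolant through the nodes, and a midpoint sum whose cells align with the knots integrates that spline exactly:

```python
import numpy as np

n = 8
nodes = np.arange(n + 1) / n
fvals = np.exp(nodes)

# Trapezoidal rule on the equally spaced nodes.
trap = (1.0 / n) * (fvals[0] / 2 + fvals[1:-1].sum() + fvals[-1] / 2)

# Integral of the linear spline s_f via a midpoint sum (exact here,
# since each cell lies inside a single knot interval).
m = 200000
xm = (np.arange(m) + 0.5) / m
spline_integral = np.interp(xm, nodes, fvals).mean()

print(trap, spline_integral)   # the two numbers agree to rounding
```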
 Show that, in terms of the Bernstein polynomials $\beta_{j,n}$,
\[
x^k = \sum_{j=k}^n\frac{\binom{j}{k}}{\binom{n}{k}}\beta_{j,n}(x),
\]
where $k=0,1, 2, \ldots, n$.
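A numerical check of the identity (a sketch; the values of $n$ and $k$ are chosen arbitrarily), using $\beta_{j,n}(x)=\binom{n}{j}x^j(1-x)^{n-j}$:

```python
import numpy as np
from math import comb

n, k = 7, 3
x = np.linspace(0.0, 1.0, 101)

def beta(j):
    # Bernstein polynomial beta_{j,n}
    return comb(n, j) * x**j * (1 - x)**(n - j)

rhs = sum(comb(j, k) / comb(n, k) * beta(j) for j in range(k, n + 1))
print(np.max(np.abs(rhs - x**k)))   # agreement to machine precision
```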
Assignment 6  Due Wednesday, October 10, 2018.
 Read sections 2.2.2–2.2.4, the notes
on Fourier
series, and the notes on
the
discrete Fourier transform.
 Do the following problems.
 Section 2.2: 14
 Prove this: Let $g$ be a $2\pi$-periodic function (a.e.) that
is integrable on each bounded interval in $\mathbb R$. Then,
$\int_{-\pi+c}^{\pi+c} g(u)du$ is independent of $c$. In particular,
$\int_{-\pi+c}^{\pi+c} g(u)du=\int_{-\pi}^\pi g(u)du$.
 Compute the Fourier series for the following functions.
 $f(x) = x$, $0\le x \le 2\pi$
 $f(x) = x$, $-\pi \le x \le \pi$
 $f(x) = e^{2x}$, $-\pi \le x \le \pi$ (complex form).
 Compute the complex form of the Fourier series for $f(x) =
e^{2x}$, $0 \le x \le 2\pi$. Why is this different from 3(c) above?
Use this Fourier series and Parseval's theorem to sum the series
$\sum_{k=-\infty}^\infty (4+k^2)^{-1}$.
 The following problem is aimed at showing that
$\{e^{inx}\}_{n=-\infty}^\infty$ is complete in $L^2[-\pi,\pi]$.
 Consider the series $\sum_n c_n e^{inx}$, where $\sum_n |c_n| <
\infty$. Show that $\sum_n c_n e^{inx}$
converges uniformly to a continuous function $f(x)$ and that the series
is the Fourier series for $f$. (It's possible for a trigonometric
series to converge pointwise to a function, but not be
the Fourier series for that function.)
 Use the previous problem to show that if $f$ is a continuous,
piecewise smooth $2\pi$-periodic function, then the FS for $f$
converges uniformly to $f$. (Hint: Show that if $f'\in L^2[-\pi,\pi]$,
then the series $\sum_{k=-\infty}^\infty k^2|c_k|^2$ is convergent.)
 Apply this result to show that the FS for a linear spline $s(x)$,
which satisfies $s(-\pi)=s(\pi)$, is uniformly convergent to
$s(x)$. Show that such splines are dense in $L^2[-\pi,\pi]$.
 Show that $\{e^{inx}\}_{n=-\infty}^\infty$ is complete in
$L^2[-\pi,\pi]$.
 Let $\mathcal S_n$ be the set of $n$-periodic,
complex-valued sequences.
 Suppose that $\mathbf x \in \mathcal S_n$. Show that $
\sum_{j=m}^{m+n-1}{\mathbf x}_j = \sum_{j=0}^{n-1}{\mathbf x}_j
$. (This is the DFT analogue of problem 1 above.)
 Prove the Convolution Theorem for
the DFT. (See Notes on the
Discrete Fourier Transform, pg. 3.)
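The Convolution Theorem can be verified numerically. This sketch uses numpy's unnormalized DFT convention; if the notes normalize the DFT by $1/n$, a factor of $n$ appears instead:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
a = rng.standard_normal(n)
x = rng.standard_normal(n)

# Circular (n-periodic) convolution: (a * x)_k = sum_j a_j x_{k-j}.
conv = np.array([sum(a[j] * x[(k - j) % n] for j in range(n))
                 for k in range(n)])

# DFT of the convolution equals the entrywise product of the DFTs.
print(np.max(np.abs(np.fft.fft(conv) - np.fft.fft(a) * np.fft.fft(x))))
```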
Assignment 7  Due Wednesday, October 24, 2018.
 Read section 2.2.7 and the notes on Splines
and Finite Element Spaces.
 Do the following problems.
 Section 2.2: 18(a,d). (Both of these use the formula
$N_m(x)=\frac{x}{m-1}N_{m-1}(x)+\frac{m-x}{m-1}N_{m-1}(x-1)$, together
with induction.)
 Let $f(t)=10\cos(2t)$ and consider the ODE $u''+2u'+2u=f(t)$.
 Verify that the general solution to the equation is
$u=Ae^{-t}\cos(t)+ Be^{-t}\sin(t) +2\sin(2t)-\cos(2t)$; consequently
the "steady state" periodic solution is $u_p(t)=2\sin(2t)-\cos(2t)$.
 Let $n=2^L$ and $h=\frac{2\pi}{n}$. For $L=3,5,8,\text{ and } 10$,
sample $f$ at $jh$, $j=0,\ldots,n-1$; let $f_j:=f(jh)$. Use your
favorite program to find the FFT of $\{f_0,f_1,\ldots,f_{n-1}\}$ and,
using the method outlined in the notes on
the
discrete Fourier transform, find $\hat u_k$. Finally, apply your
program's inverse FFT to the $\hat u_k$'s to obtain the approximation
$u_j$ to $u_p$ at $jh$. For each $L$, plot the $u_j$'s and the
$u_p(jh)$'s. The $u_j$'s may have a small complex part due to roundoff
error; just plot the real parts of the $u_j$'s you found by the
procedure above. Be sure to label your plots.
 For each $L$ plot the error $\{ u_0-u_p(0),u_1-u_p(h),\ldots,
u_{n-1}-u_p((n-1)h)\}$; again, label your plots.
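A sketch of the procedure for a single value of $L$, assuming the method in the notes amounts to dividing each DFT coefficient by the ODE's symbol $p(ik)=(ik)^2+2(ik)+2$ (plotting is omitted here):

```python
import numpy as np

L = 5
n = 2**L
h = 2 * np.pi / n
t = h * np.arange(n)

f = 10 * np.cos(2 * t)
fhat = np.fft.fft(f)

k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers 0, 1, ..., -1
uhat = fhat / ((1j * k)**2 + 2 * (1j * k) + 2)  # divide by the ODE's symbol
u = np.fft.ifft(uhat).real                    # discard the tiny imaginary part

u_exact = 2 * np.sin(2 * t) - np.cos(2 * t)   # steady-state solution
print(np.max(np.abs(u - u_exact)))
```

Since $f$ contains only the modes $k=\pm 2$, the computed $u_j$ agree with the steady-state solution to machine precision here.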
 Let $\alpha$, $\xi$, $\eta$ be $n$-periodic sequences, and let $\mathbf a$, $\mathbf x$, $\mathbf y$
be column vectors with entries $a_0, \ldots, a_{n-1}$,
etc. Show that the convolution $\eta = \alpha*\xi$ is
equivalent to the matrix equation $\mathbf y = A\mathbf x$, where $A$ is an $n\times n$
matrix whose first column is $\mathbf a$, and whose remaining columns
are $\mathbf a$ with the entries cyclically permuted. Such matrices
are called cyclic. Use the DFT and the convolution theorem to find the
eigenvalues of a cyclic matrix. Use this method, along with your
favorite software, to find the eigenvalues and eigenvectors of the
matrix below. (For this matrix, $\mathbf a =(3\ 1\ 4\ 5)^T$.)
\[
\begin{pmatrix}
3 &5 &4 &1 \\
1 &3 &5 &4 \\
4 &1 &3 &5\\
5 &4 &1 &3
\end{pmatrix}
\]
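A numerical sketch for the matrix above: the eigenvalues of a cyclic (circulant) matrix with first column $\mathbf a$ are the DFT of $\mathbf a$, and the DFT exponential vectors are the eigenvectors.

```python
import numpy as np

a = np.array([3.0, 1.0, 4.0, 5.0])
n = len(a)
# Columns are cyclic shifts of a; this reproduces the matrix above.
C = np.column_stack([np.roll(a, k) for k in range(n)])

lam = np.fft.fft(a)                      # eigenvalues
w = np.exp(2j * np.pi / n)
for m in range(n):
    v = w ** (m * np.arange(n))          # eigenvector belonging to lam[m]
    print(m, lam[m], np.max(np.abs(C @ v - lam[m] * v)))
```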
 Let $S^{1/n}(1,0)$ be the space of piecewise linear splines, with
knots at $x_j=j/n$, and let $N_2(x)$ be the linear B-spline ("tent
function"; see Keener, p. 81, or my notes on splines).
 Let $\phi_j(x):= N_2(nx +1 -j)$. Show that
$\{\phi_j(x)\}_{j=0}^n$ is a basis for $S^{1/n}(1,0)$.
 Let $S_0^{1/n}(1,0):=\{s\in S^{1/n}(1,0):s(0)=s(1)=0\}$. Show that
$S_0^{1/n}(1,0)$ is a subspace of $S^{1/n}(1,0)$ and that
$\{\phi_j(x)\}_{j=1}^{n-1}$ is a basis for it.
Assignment 8  Due Wednesday, October 31, 2018.
 Read section 2.2.7, the notes
on Splines
and Finite Element Spaces, and on Bounded
Operators & Closed Subspaces.
 Do the following problems.
 Section 2.2: 25(a,b), 26(b), 27(a)
 Consider the space of cubic Hermite splines
$S_0^{1/n}(3,1)\subset S^{1/n}(3,1)$ that satisfy $s(0)=s(1)=0$. Show
that $\langle u,v\rangle = \int_0^1 u''v''dx$ defines an inner product
on $S_0^{1/n}(3,1)$.
 We want to use a Galerkin method to numerically solve the
boundary value problem (BVP): $-u'' = f(x)$, $u(0) = u(1) = 0$,
$f \in C[0,1]$. Let $H^1_0$ be the space of all functions $g:[0,1]\to
\mathbb R$ such that $g'$ is in $L^2[0,1]$ and $g(0)=g(1)=0$. Define
an inner product on $H^1_0$ by $ \langle f,g\rangle_{H^1_0}=\int_0^1
f'g'dx$. You are given that $H^1_0$ is a Hilbert space.
 Weak form of the problem. Suppose that
$v\in H^1_0$. Multiply both sides of the equation $-u''=f$ by $v$ and use
integration by parts to show that $ \langle u,v\rangle_{H^1_0} =
\langle f,v\rangle_{L^2[0,1]}$. This is called the ``weak'' form of
the BVP.
 Conversely, suppose that $u\in H^1_0$ is also in
$C^2[0,1]$ and that, for all $v\in H^1_0$, $u$ satisfies
\[
\langle u,v\rangle_{H^1_0} =
\langle f,v\rangle_{L^2[0,1]}.
\]
Show that $-u''=f$.
 Let $S_0^{1/n}(1,0) \subset S^{1/n}(1,0)$ be the set of all
linear splines that are $0$ at $x=0$ and $x=1$; note that
$S_0^{1/n}(1,0)$ is a subspace of $H^1_0$. Let $s_n\in S^{1/n}_0(1,0)$
satisfy $\|u-s_n\|_{H^1_0} = \min_{s\in S^{1/n}_0(1,0)}\|u -
s\|_{H^1_0}$; thus, $s_n$ is the least-squares approximation to $u$
from $S^{1/n}_0(1,0)$. Expand $s_n$ in the basis from Assignment
7, problem 4(b): $s_n =
\sum_{j=1}^{n-1}\alpha_j\phi_j$. Use the normal equations for the
problem in connection with the weak form of the problem to show that
the $\alpha_j$'s satisfy $G\alpha = \beta$, where $\beta_j= \langle
f,\phi_j\rangle_{L^2[0,1]}$ and $G_{kj} =\langle
\phi_j,\phi_k\rangle_{H^1_0}$.
 Show that
$
G=\begin{pmatrix} 2n& -n &0 &\cdots &0\\
-n & 2n& -n &0 &\cdots \\
0&-n& 2n& \ddots &\ddots \\
\vdots &\cdots &\ddots &\ddots &-n\\
0 &\cdots &0 &-n &2n
\end{pmatrix}.
$
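The entries of $G$ can be checked by quadrature for a small $n$ (a sketch): $\phi_j$ is the hat function centered at $j/n$, so $\phi_j' = n$ on $((j-1)/n, j/n)$ and $-n$ on $(j/n,(j+1)/n)$, and $G_{kj}=\int_0^1 \phi_j'\phi_k'\,dx$.

```python
import numpy as np

n = 4
m = 400000                        # midpoint grid; cells align with the knots
xm = (np.arange(m) + 0.5) / m

def phi_prime(j):
    # piecewise constant derivative of the hat function centered at j/n
    d = np.zeros_like(xm)
    d[(xm > (j - 1) / n) & (xm < j / n)] = n
    d[(xm > j / n) & (xm < (j + 1) / n)] = -n
    return d

G = np.array([[np.sum(phi_prime(j) * phi_prime(k)) / m
               for j in range(1, n)] for k in range(1, n)])
print(np.round(G, 6))             # 2n on the diagonal, -n on the off-diagonals
```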
Assignment 9  Due Wednesday, November 7, 2018.
 Read sections 3.1–3.3, the notes
on
the projection theorem, the Riesz representation theorem, etc., and the notes on
an
example of the Fredholm alternative and finding a resolvent.
 Do the following problems.
 Section 3.2: 3(d) (Assume the appropriate
operators are closed and that λ is real.)
 Section 3.3: 2 (Assume the appropriate
operators are closed and that λ is real.)
 Let $V$ be a subspace of a Hilbert space $\mathcal H$. Show that
$(V^\perp)^\perp= \overline{V}$, where $\overline{V}$ is the closure
of $V$ in $\mathcal H$. Use this to show that $\mathcal H =
\overline{V}\oplus V^\perp$.
 Let V be a Banach space. Show that a linear operator L:V → V
is bounded if and only if L is continuous.
 Let $k(x,y)$ be defined by
\[
k(x,y) = \left\{
\begin{array}{cl}
y, & 0 \le y \le x\le 1, \\
x, & x \le y \le 1.
\end{array}
\right.
\]

Let $L$ be the integral operator $L\,f = \int_0^1 k(x,y)f(y)dy$. Show
that $L:C[0,1]\to C[0,1]$ is bounded and that the norm
$\|L\|_{C[0,1]\to C[0,1]}\le 1$. Bonus (5 pts.): Show that
$\|L\|_{C[0,1]\to C[0,1]}=1/2$.
 Show that $k(x,y)$ is a Hilbert-Schmidt
kernel and that $\|L\|_{L^2\to L^2} \le \sqrt{\frac{1}{6}}$.
 Finish the proof of the Projection Theorem: If for every $f\in
\mathcal H$ there is a $p\in V$ such that $\|p-f\|=\min_{v\in
V}\|v-f\|$, then $V$ is closed.
 Let $L$ be a bounded linear operator on a Hilbert space $\mathcal
H$. Show that these two formulas for $\|L\|$ are equivalent:
 $\|L\| = \sup \{\|Lu\| : u \in {\mathcal H},\ \|u\| = 1\}$
 $\|L\| = \sup \{|\langle Lu,v\rangle| : u,v \in {\mathcal H},\
\|u\|=\|v\|=1\}$
Assignment 10  Due Wednesday, November 14, 2018.
 Read sections 3.3–3.5, and my notes on Compact
Operators, and on the
Closed Range Theorem.
 Do the following problems.
 Section 3.3: 1 (Assume the appropriate
operators are closed and that λ is real.)
 Section 3.4: 2(b)
 Consider the Hilbert space $\mathcal H=\ell^2$ and let
$S=\{x=(x_{1}\ x_{2}\ x_3\ \ldots)\in \ell^2:
\sum_{n=1}^\infty (n^2+1)|x_n|^2 <1\}$. Show that $S$ is a
precompact subset of $\ell^2$.
 Let $S$ be a bounded subset (not a subspace!) of a Hilbert space
$\mathcal H$. Show that $S$ is precompact if and only if every
sequence in $S$ has a convergent subsequence. (Note: If $S$ is just
precompact, the limit point of the sequence may not be in $S$, because
$S$ may not be closed.)
 Show that every compact operator on a Hilbert space is bounded.
 Consider the finite rank (degenerate) kernel
\[
k(x,y) = \varphi_1(x)\psi_1(y) + \varphi_2(x)\psi_2(y),
\]
where $\varphi_1 = 6x-3$, $\varphi_2 = 3x^2$,
$\psi_1 = 1$, $\psi_2 = 8x - 6$.
Let $Ku= \int_0^1 k(x,y)u(y)\,dy$. Assume that $L =
I-\lambda K$ has closed range.
 For what values of $\lambda$ does the integral equation
\[ u(x) - \lambda\int_0^1 k(x,y)u(y)\,dy = f(x) \]
have a solution for all $f \in L^2[0,1]$?
 For these values, find the solution $u = (I -
\lambda K)^{-1}f$ — i.e., find the resolvent.
 For the values of λ for which the equation
does not have a solution for all f, find a condition on f
that guarantees a solution exists. Will the solution be unique?
 In the following, $H$ is a Hilbert space and $B(H)$ is the set of
bounded linear operators on $H$. Let $L$ be in $B(H)$ and let $N:= \sup
\{|\langle Lu, u\rangle| : u \in H,\ \|u\| = 1\}$.
 Verify the identity $\langle L(u+\alpha v), u+\alpha v\rangle - \langle
L(u-\alpha v), u-\alpha v\rangle = 2\bar\alpha\langle
Lu,v\rangle+2\alpha\langle Lv,u\rangle$, where $|\alpha| = 1$.
 Show that $N \le \|L\|$.
 Let $L$ be a self-adjoint operator on $H$,
which may be real or complex. Use (a) and (b) to show that $N=
\|L\|$. (Hint: In the complex case, choose $\alpha$ so
that $\alpha\langle Lu,v\rangle = |\langle
Lu,v\rangle|$. For the real case, use $\alpha=\pm 1$, as required.)
 Suppose that $H$ is a complex Hilbert space. If $L \in
B(H)$, then use (a) and (b) to show that
$N \le \|L\| \le 2N$.
 For the real Hilbert space $H = \mathbb R^2$, let $L =
\begin{pmatrix}
0& 1\\
-1 & 0 \end{pmatrix}.
$
Show that $\|L\| = 1$, but $N=0$.
Assignment 11  Due Monday, November 26, 2018.
 Read sections 3.3–3.5 and my notes on
Spectral Theory for Compact Operators.
 Do the following problems.
 Section 3.4: 2(a)
 Let $L\in \mathcal B(\mathcal H)$. Suppose that there is a constant
$c>0$, independent of $f$, such that $\|Lf\|\ge c\|f\|$ for all $f\in
N(L)^\perp$. Show that $R(L)$ is closed.
 Finish the proof of Proposition 2.5 in my notes on
Compact Operators.
 Consider the kernel $k(x,y)=\min(x,y)$, $0\le x,y\le 1$.
 Show that $Ku=\int_0^1 k(x,y)u(y)dy$ is a compact, self-adjoint
operator.
 Let $U(x)=\int_0^x u(y)dy - \int_0^1 u(y)dy$. Show that $Ku(x) =
-\int_0^x U(y)dy$, and that $ \int_0^1 Ku(x)\,u(x)dx = \int_0^1 U(x)^2dx$.
 Use this identity to show that $0$ is not an eigenvalue of
$K$ — i.e., $N(K)=\{0\}$.
 Show that there is no constant $c>0$ such that $c\|u\|\le
\|Ku\|$. Explain why this implies $K^{-1} \not\in \mathcal B(\mathcal
H)$. (Hint: consider the sequence $u_n(x) = \sqrt{2} \cos(n\pi
x)$.) The point here is that $\lambda=0$ is not an eigenvalue of $K$,
but is in the spectrum of $K$.
Assignment 12  Due Wednesday, December 5, 2018.
 Read sections 4.1, 4.2, 4.3.1, 4.3.2, 4.5.1 and
my notes on
example problems for distributions.
 Do the following problems.
 Section 3.4: 2(d) (You may use problem 4 from Assignment 11.)
 Section 4.1: 4, 7
 Section 4.2: 1, 3, 4
 Section 4.3: 3
 Let $Ku(x)=\int_0^1 k(x,y)u(y)dy$, where $k(x,y)$ is defined by $
k(x,y) = \left\{
\begin{array}{cl}
y, & 0 \le y \le x\le 1, \\
x, & x \le y \le 1.
\end{array}
\right.$
 Show that $0$ is not an eigenvalue of $K$.
 Show that $Ku(0)=0$ and $(Ku)'(1)=0$.
 Find the eigenvalues and eigenvectors of $K$. Explain why the
(normalized) eigenvectors of $K$ are a complete orthonormal basis for
$L^2[0,1]$.
 Let $Lu=u''$, $u(0)=0$, $u'(1)=2u(1)$.
 Show that the Green's function
for this problem is
\[
G(x,y)=\left\{
\begin{array}{rl}
(2y-1)x, & 0 \le x < y \le 1\\
(2x-1)y, & 0 \le y< x \le 1.
\end{array} \right.
\]
 Let $Kf(x) := \int_0^1G(x,y)f(y)dy$. Show that $K$ is a self-adjoint
Hilbert-Schmidt operator, and that $0$ is not an eigenvalue of $K$.
 Use (b) and the spectral theory of compact operators to show that the
orthonormal set of eigenfunctions for $L$ forms a complete set in
$L^2[0,1]$.
Updated 11/26/2018.