Math 641600 — Fall 2017
Assignments
Assignment 1  Due Wednesday, September 6, 2017.
 Read sections 1.1–1.4.
 Do the following problems.
 Section 1.1: 4, 5, 7(a), 8, 9(a) (Do the first 3, but without
software.)
 Section 1.2: 9
 Let $U$ be a subspace of an inner product space $V$, with the
inner product and norm being $\langle\cdot,\cdot \rangle$ and
$\|\cdot\|$. Also, let $v$ be in $V$. (Do not assume that
$U$ is finite dimensional or use arguments requiring a basis.)
 Fix $v\in V$. Show that there is a unique vector $p \in U$ that
satisfies $\min_{u\in U}\|v-u\| = \|v-p\|$ if and only if $v-p\in
U^\perp$.
 Suppose $p$ exists for every $v\in V$. Since $p$ is uniquely
determined by $v$, we may define a map $P: V \to U$ via
$Pv:=p$. Show that $P$ is a
linear map and that $P$ satisfies $P^2 = P$. ($P$ is called
an orthogonal projection. The vector $p$ is the orthogonal
projection of $v$ onto $U$.)
 If the projection $P$ exists, show that for all $w,z\in V$,
$\langle Pw,z\rangle = \langle Pw,Pz\rangle= \langle
w,Pz\rangle$. Use this to show that $U^\perp= \{w\in
V\colon Pw=0\}$.
 Suppose that the projection $P$ exists. Show that $V=U\oplus
U^\perp$, where $\oplus$ indicates the direct sum of the two
spaces. (This is easy, but important.)
 Let $U$ and $V$ be as in the previous exercise. Suppose that $U$
is finite dimensional and that $B=\{u_1,u_2,\ldots,u_n\}$ is an
ordered basis for $U$. In addition, let $G$ be the $n\times n$
matrix with entries $G_{jk}= \langle u_k,u_j\rangle$.
 Show that $G$ is positive definite and thus invertible.
 Let $v\in V$ and $d_k := \langle v,u_k\rangle$. Show that $p$
exists for every $v$ and is given by $p=\sum_j x_j u_j\in U$, where
the $x_j$'s satisfy the normal equations, $d_k = \sum_{j=1}^n
G_{kj}x_j$.
 Show that if $B$ is orthonormal, then $Pv=\sum_j \langle
v,u_j\rangle u_j$.
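The normal equations above lend themselves to a quick numerical sanity check. The sketch below is illustrative only (not part of the assignment): it takes a deliberately non-orthogonal basis for a two-dimensional subspace $U$ of $\mathbb R^4$, solves $Gx=d$, and verifies that the residual $v-p$ is orthogonal to $U$.

```python
import numpy as np

# A two-dimensional subspace U of R^4 with a deliberately
# non-orthogonal basis {u_1, u_2} (the columns of B)
u1 = np.array([1.0, 1.0, 0.0, 0.0])
u2 = np.array([1.0, 0.0, 1.0, 0.0])
B = np.column_stack([u1, u2])
v = np.array([1.0, 2.0, 3.0, 4.0])

G = B.T @ B                             # Gram matrix, G_{jk} = <u_k, u_j>
d = B.T @ v                             # d_k = <v, u_k>
x = np.linalg.solve(G, d)               # normal equations: G x = d
p = B @ x                               # orthogonal projection of v onto U

print(np.abs(B.T @ (v - p)).max())      # v - p is orthogonal to U
```

Perturbing the coefficients $x$ and comparing distances also illustrates the minimization property of $p$.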
Assignment 2  Due Wednesday, September 13, 2017.
 Read the notes
on
Banach spaces and Hilbert Spaces, and sections 2.1 and 2.2 in
Keener.
 Do the following problems.
 Section 1.2: 10
 Section 1.3: 2, 3, 5
 This problem concerns several important inequalities.
 Show that if α, β are positive and α + β
=1, then for all u,v ≥ 0 we have
u^{α}v^{β} ≤ αu + βv.
 Let x,y ∈ R^{n}, and let p > 1 and define
q by q^{−1} = 1 − p^{−1}. Prove Hölder's
inequality,
∑_{j} |x_{j}y_{j}| ≤ ‖x‖_{p}
‖y‖_{q}.
Hint: use the inequality in part (a), but with appropriate choices of
the parameters. For example, u =
(|x_{j}|/‖x‖_{p})^{p}.
 Let x,y ∈ R^{n}, and let p > 1. Prove
Minkowski's inequality,
‖x+y‖_{p} ≤ ‖x‖_{p} + ‖y‖_{p}.
Use this to show that ‖x‖_{p} defines a norm on
R^{n}. Hint: you will need to use Hölder's
inequality, along with a trick.
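A numerical spot-check of Hölder's inequality is no substitute for a proof, but it builds confidence in the statement. This sketch (illustrative only) uses p = 3, so the conjugate exponent is q = 3/2:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1.0)                       # conjugate exponent: 1/p + 1/q = 1

worst = 0.0
for _ in range(1000):
    x = rng.standard_normal(8)
    y = rng.standard_normal(8)
    lhs = np.sum(np.abs(x * y))
    rhs = np.sum(np.abs(x)**p)**(1/p) * np.sum(np.abs(y)**q)**(1/q)
    worst = max(worst, lhs / rhs)

print(worst)                            # ratio never exceeds 1
```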
 Let $\{v_1,\ldots,v_m\}$ be a set of linearly independent vectors
in $\mathbb R^n$, with $m < n$, and let $A := [v_1\ \cdots \ v_m]$;
that is, $A$ is an $n\times m$ matrix having the $v_j$'s for columns.
 Use Gram–Schmidt to show that $A=QR$, where $Q$ is an $n\times m$
matrix having columns that are orthonormal and $R$ is an invertible,
upper triangular $m\times m$ matrix.
 Let $\mathbf y\in \mathbb R^n$. Use the normal equations for a
minimization problem to show that the minimizer of $\|\mathbf y -
A\mathbf x\|$ is given by $\mathbf x_{min} = R^{-1}Q^\ast \mathbf y$.
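The QR least-squares formula can be compared numerically against the normal-equations solution. A sketch (illustrative only; it uses numpy's built-in reduced QR rather than a hand-rolled Gram–Schmidt):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))         # independent columns (with prob. 1)
y = rng.standard_normal(6)

Q, R = np.linalg.qr(A)                  # reduced QR: Q is 6x3, R is 3x3
x_min = np.linalg.solve(R, Q.T @ y)     # x_min = R^{-1} Q^* y

x_ne = np.linalg.solve(A.T @ A, A.T @ y)   # normal-equations solution
print(np.allclose(x_min, x_ne))         # True
```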
 Let U be a unitary n×n matrix. Show that the following hold.
 ⟨Ux, Uy⟩ = ⟨x, y⟩.
 The eigenvalues of U all lie on the unit circle, |λ| = 1.
 Eigenvectors corresponding to distinct eigenvalues are orthogonal.
Assignment 3  Due Wednesday, September 20, 2017.
 Read Keener's sections 2.1 and the notes
on Lebesgue
integration.
 Do the following problems.
 Section 2.1: 4, 5
 Before one can define a norm or inner product on some set, one
has to show that the set is a vector space, i.e., that
linear combinations of vectors are in the space. Do this for the
spaces of sequences below. The inequalities from the previous
assignment will be useful.
 $\ell^2=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty
|x_j|^2<\infty\}$
 $\ell^p=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty
|x_j|^p<\infty\}$, all $1\le p<\infty$, $p\ne 2$.
 $\ell^\infty = \{x=\{x_n\}_{n=1}^\infty\colon \sup_{1\le
j}|x_j|<\infty \}$.
 Show that, for all $1\le p <\infty$, $\|x\|_p :=
\big(\sum_{j=1}^\infty |x_j|^p \big)^{1/p}$ defines a norm on
$\ell^p$.
 Show that $\ell^2$ is an inner product space, with $\langle
x,y\rangle = \sum_{j=1}^\infty x_j \bar y_j$ being the inner product, and
that with this inner product it is a Hilbert space. Bonus: show that
it is separable.
 Let $C^1[0,1]$ be the set of all continuously differentiable
real-valued functions on $[0,1]$. Show that $C^1[0,1]$ is a Banach
space under the norm $\|f\|_{C^1} := \max_{x\in [0,1]}|f(x)| + \max_{x\in
[0,1]}|f'(x)|$.
 Let $f\in C^1[0,1]$. Show that
$\|f\|_{C[0,1]}\le C\|f\|_{H^1[0,1]}$, where $C$ is a constant
independent of $f$ and $\|f\|_{H^1[0,1]}^2 := \int_0^1\big( f(x)^2 +
f'(x)^2\big)dx$.
Assignment 4  Due Friday, September 29, 2017.
 Read the notes on
Lebesgue
integration and
on Orthonormal
sets and expansions.
 Do the following problems.
 Section 2.1: 10, 11
 Section 2.2: 1 (Do $w=1$.), 10
 A measurable function whose range consists of a finite number of
values is a simple function —
see Lebesgue
integration, p. 5. Use the definition of the Lebesgue integral
in terms of Lebesgue sums, from eqn. 2, to show that, in terms of this
definition, the integral of a simple function ends up being the one in
eqn. 3 on p. 6.
 Let F(s) = ∫_{ 0}^{∞} e^{ − s
t} f(t)dt be the Laplace transform of f ∈
L^{1}([0,∞)). Use the Lebesgue dominated convergence
theorem to show that F is continuous from the right at s = 0. That is,
show that
lim_{ s↓0} F(s) = F(0) = ∫_{
0}^{∞}f(t)dt
 Let f_{n}(x) = n^{3/2} x e^{−n x}, where
x ∈ [0,1] and n = 1, 2, 3, ....
 Verify that the pointwise limit of f_{n}(x) is f(x) = 0.
 Show that ‖f_{n}‖_{C[0,1]} → ∞ as n
→ ∞, so that f_{n} does not converge uniformly to
0.
 Find a constant C, independent of n, such that
f_{n}(x) ≤ C x^{−1/2} for all x ∈ (0,1].
 Use the Lebesgue dominated convergence theorem to show that
lim_{ n→∞} ∫_{ 0}^{1}
f_{n}(x)dx = 0.
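For intuition (not part of the assignment), the behavior in parts (b) and (d) can be observed numerically: the sup norm grows like √n/e while the integrals tend to 0 like n^{−1/2}. A sketch with a composite trapezoidal rule:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
integrals, sups = [], []
for n in [1, 10, 100, 1000]:
    fn = n**1.5 * x * np.exp(-n * x)
    # composite trapezoidal rule for \int_0^1 f_n(x) dx
    integrals.append(((fn[:-1] + fn[1:]) / 2 * np.diff(x)).sum())
    sups.append(fn.max())               # maximum attained near x = 1/n

print(integrals)                        # tends to 0 (like n^{-1/2})
print(sups)                             # grows without bound (like sqrt(n)/e)
```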
 Let $U:=\{u_j\}_{j=1}^\infty$ be an orthonormal set in a Hilbert
space $\mathcal H$. Show that the following two statements are
equivalent. (You may use what we have proved for o.n. sets in
general; for example, Bessel's inequality, minimization properties,
etc.)
 $U$ is maximal in the sense that there is no nonzero vector in
$\mathcal H$ that is orthogonal to $U$. (Equivalently, $U$ is not a
proper subset of any other o.n. set in $\mathcal H$.)
 Every vector in $\mathcal H$ may be uniquely represented as the
series $f=\sum_{j=1}^\infty \langle f, u_j\rangle u_j$.
Assignment 5  Due Wednesday, October 4, 2017.
 Read sections 2.2.2–2.2.4 and the notes on
Approximation
of Continuous Functions.
 Do the following problems.
 Section 2.2: 8(a,b,c,d) (FYI: the formula for $T_n(x)$ has an
$n!$ missing in the numerator), 9
 This problem is aimed at showing that the Chebyshev polynomials
form a complete set in $L^2_w$, which has the weighted inner product
\[ \langle f,g\rangle_w := \int_{-1}^1
\frac{f(x)\overline{g(x)}\,dx}{\sqrt{1 - x^2}}. \]
 Show that the continuous functions are dense in $L^2_w$. Hint: if
$f\in L^2_w$, then $ \frac{f(x)}{(1 - x^2)^{1/4}}$ is in $L^2[-1,1]$.
 Show that if $f\in L^\infty[-1,1]$, then $\|f\|_w \le
\sqrt{\pi}\|f\|_\infty$.
 Follow the proof given in
the notes on Orthonormal
Sets and Expansions showing that the Legendre polynomials form a
complete set in $L^2[-1,1]$ to show that the Chebyshev polynomials
form a complete orthogonal set in $L^2_w$.
 Let $\delta>0$. We define the modulus of continuity for $f\in
C[0,1]$ by $\omega(f,\delta) := \sup_{|s-t|\le
\delta,\,s,t\in [0,1]}|f(s)-f(t)|$.
 Explain why $\omega(f,\delta)$ exists for every $f\in C[0,1]$.
 Fix $\delta>0$. Let $S_\delta = \{ \epsilon >0 \colon |f(t) - f(s)|
< \epsilon\ \forall\ s,t \in [0,1], \ |s - t| \le \delta\}$. In other
words, for given $\delta$, $S_\delta$ is the set of all
$\epsilon$ such that $|f(t) - f(s)| < \epsilon$ holds for all $|s -
t|\le \delta$. Show that $\omega(f, \delta) = \inf S_\delta$.
 Show that $\omega(f,\delta)$ is nondecreasing as a
function of $\delta$. (Or, more to the point, as $\delta \downarrow 0$,
$\omega(f,\delta)$ gets smaller.)
 Show that $\lim_{\delta \downarrow 0} \omega(f,\delta) = 0$.

 Let g be C^{2} on an interval
[a,b]. Let h = b − a. Show that if g(a) = g(b) = 0, then $
\|g\|_{C[a,b]} \le (h^2/8)
\|g''\|_{C[a,b]}$. Give an example that shows
that $1/8$ is the best possible constant.
 Use the previous part to show that if f ∈
C^{2}[0,1], then the equally spaced linear spline interpolant
f_{n} satisfies
‖f −
f_{n}‖_{C[0,1]} ≤ (8n^{2})^{−1} ‖f′′‖_{C[0,1]}.
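The spline error bound is easy to observe numerically. A sketch (illustrative only) using f(x) = sin(πx), for which ‖f′′‖ = π², and numpy's built-in piecewise-linear interpolation:

```python
import numpy as np

f = lambda x: np.sin(np.pi * x)
fpp_norm = np.pi**2                     # max |f''| on [0,1] for f = sin(pi x)
x = np.linspace(0.0, 1.0, 100001)

errs, bounds = [], []
for n in [4, 8, 16, 32]:
    knots = np.linspace(0.0, 1.0, n + 1)
    fn = np.interp(x, knots, f(knots))  # equally spaced linear spline interpolant
    errs.append(np.abs(f(x) - fn).max())
    bounds.append(fpp_norm / (8 * n**2))

print(errs)
print(bounds)                           # errs[i] <= bounds[i] for every i
```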
Assignment 6  Due Wednesday, October 11, 2017.
 Read section 2.2.7
 Do the following problems.
 Section 2.2: 14
 This problem is aimed at proving the Riemann–Lebesgue Lemma, using
the Weierstrass Approximation Theorem (WAT).
 Show that if $p(x) = \sum_{k=0}^n a_k x^k$, then $\lim_{\lambda
\to \infty} \int_a^b p(x) e^{i\lambda x}dx=0$. (Hint: integrate by
parts and then use induction.)
 Use the Weierstrass Approximation Theorem to show that the
polynomials are dense in $L^1[a,b]$.
 Use (a), (b) and the approximation argument from the notes
on
Fourier series to complete the proof.
 Compute the Fourier series for the following functions.
 f(x) = x, 0≤ x ≤ 2π
 f(x) = x, − π ≤ x ≤ π
 f(x) = e^{2x}, − π ≤ x ≤ π (complex form).
 Compute the complex form of the Fourier series for $f(x) =
e^{2x}$, $0 \le x \le 2\pi$. Why is this different from 3(c) above?
Use this Fourier series and Parseval's theorem to sum the series
$\sum_{k=-\infty}^\infty (4+k^2)^{-1}$.
 Prove this: Let $g$ be a $2\pi$-periodic function (a.e.) that
is integrable on each bounded interval in $\mathbb R$. Then,
$\int_{-\pi+c}^{\pi+c} g(u)du$ is independent of $c$. In particular,
$\int_{-\pi+c}^{\pi+c} g(u)du=\int_{-\pi}^\pi g(u)du$.
 The following problem is aimed at showing that
$\{e^{inx}\}_{n=-\infty}^\infty$ is complete in $L^2[-\pi,\pi]$.
 Consider the series ∑_{n} c_{n}
e^{inx}, where ∑_{n} |c_{n}| <
∞. Show that ∑_{n} c_{n} e^{inx}
converges uniformly to a continuous function f(x) and that the series
is the Fourier series for f. (It's possible for a trigonometric
series to converge pointwise to a function, but not be
the Fourier series for that function.)
 Use the previous problem to show that if $f$ is a continuous,
piecewise smooth $2\pi$-periodic function, then the FS for $f$
converges uniformly to $f$. (Hint: Show that if $f'\in L^2[-\pi,\pi]$,
then the series $\sum_{k=-\infty}^\infty k^2|c_k|^2$ is convergent.)
 Apply this result to show that the FS for a linear spline $s(x)$,
which satisfies $s(0)=s(2\pi)$, is uniformly convergent to
$s(x)$. Show that such splines are dense in $L^2[-\pi,\pi]$.
 Show that $\{e^{inx}\}_{n=-\infty}^\infty$ is complete in
$L^2[-\pi,\pi]$.
Assignment 7  Due Monday, October 30, 2017.
 Read sections 3.1, 3.2 and my notes
on Xray
Tomography and on Bounded
Operators & Closed Subspaces.
 Do the following problems.
 Section 2.2: 25(a,b), 26(b), 27(a)
 Let $S^{1/n}(1,0)$ be the space of piecewise linear splines, with
knots at $x_j=j/n$, and let $N_2(x)$ be the linear B-spline ("tent
function"; see Keener, p. 81, or my notes on splines).
 Let $\phi_j(x):= N_2(nx + 1 - j)$. Show that
$\{\phi_j(x)\}_{j=0}^n$ is a basis for $S^{1/n}(1,0)$.
 Let $S_0^{1/n}(1,0):=\{s\in S^{1/n}(1,0):s(0)=s(1)=0\}$. Show that
$S_0^{1/n}(1,0)$ is a subspace of $S^{1/n}(1,0)$ and that
$\{\phi_j(x)\}_{j=1}^{n-1}$ is a basis for it.
 Let $H_0$ be the set of all $f\in C^{(0)}[0,1]$ such that
$f(0)=f(1)=0$ and that $f'$ is piecewise continuous. Show that
$\langle f,g\rangle_{H_0} :=\int_0^1f'(x)g'(x)dx$ defines a real
inner product on $H_0$.
 We want to use a Galerkin method to numerically solve the
boundary value problem (BVP): −u″ = f(x), u(0) = u(1) = 0,
f ∈ C[0,1].
 Weak form of the problem. Let H_{0} be as in the
previous problem. Suppose that $v\in H_0$. Multiply both sides of
the equation above by $v$ and use integration by parts to show that
$ \langle u,v\rangle_{H_0} = \langle f,v\rangle_{L^2[0,1]}$. This
is called the ``weak'' form of the BVP.
 Conversely, suppose that u ∈ H_{0} is also in
C^{(2)}[0,1] and that u satisfies
⟨u,v⟩_{H_0} = ∫_{0}^{1} f(x)
v(x) dx for all v ∈ H_{0}.
Show that u satisfies the BVP.
 Note that $S_0:=S_0^{1/n}(1,0)$ is a subspace of $H_0$ and let
$s_n\in S_0$ satisfy $\|u-s_n\|_{H_0} = \min_{s\in S_0}\|u -
s\|_{H_0}$; thus, $s_n$ is the least-squares approximation to $u$ from
$S_0$. Expand $s_n$ in the basis from problem 2(b):
$s_n = \sum_{j=1}^{n-1}\alpha_j\phi_j$. Use the normal equations and
part (a) above to show that the $\alpha_j$'s satisfy $G\alpha =
\beta$, where $\beta_j= \langle f,\phi_j\rangle_{L^2[0,1]}$ and $G_{kj}
=\langle \phi_j,\phi_k\rangle_{H_0}$.
 Show that
$
G=\begin{pmatrix} 2n& -n &0 &\cdots &0\\
-n & 2n& -n &0 &\cdots \\
0&-n& 2n& \ddots &\ddots \\
\vdots &\cdots &\ddots &\ddots &-n\\
0 &\cdots &0 &-n &2n
\end{pmatrix}
$
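As a sanity check on the Galerkin system (not part of the assignment), one can assemble G and solve for a right-hand side with known solution. Here −u″ = π² sin(πx), so u = sin(πx); the load vector ⟨f, φ_j⟩ is approximated by h·f(x_j), which is adequate for smooth f:

```python
import numpy as np

n = 64
h = 1.0 / n
nodes = np.arange(1, n) * h             # interior knots x_1, ..., x_{n-1}

# tridiagonal stiffness matrix from above: 2n on the diagonal, -n off it
G = (2.0 * n * np.eye(n - 1)
     - n * np.eye(n - 1, k=1)
     - n * np.eye(n - 1, k=-1))

f = lambda x: np.pi**2 * np.sin(np.pi * x)
beta = h * f(nodes)                     # approximation to <f, phi_j>_{L^2}
alpha = np.linalg.solve(G, beta)        # coefficients = nodal values of s_n

err = np.abs(alpha - np.sin(np.pi * nodes)).max()
print(err)                              # small: s_n tracks u = sin(pi x)
```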
 Let V be a Banach space. Show that a linear operator L:V → V
is bounded if and only if L is continuous.
 Consider the Sobolev space $H^1[0,1]$, with the inner product
$\langle f, g\rangle_{H^1} := \int_0^1 \big(f(x)\overline {g(x)} +
f'(x)\overline {g'(x)}\big)dx$. For $f\in H^1$, let $Df=f'$. Show that
$D:H^1[0,1]\to L^2[0,1]$ is bounded, and that $\|D\|_{H^1 \to L^2}=1$.
Assignment 8  Due Wednesday, November 8, 2017
 Read sections 3.3, 3.4, and my notes on Bounded
Operators & Closed Subspaces;
The projection theorem, the Riesz representation theorem, etc.
 Do the following problems.
 Section 3.2: 3(d) (Assume the appropriate
operators are closed and that λ is real.)
 Section 3.3: 2 (Assume the appropriate
operators are closed and that λ is real.)
 Let $k(x,y)$ be defined by
\[
k(x,y) = \left\{
\begin{array}{cl}
y, & 0 \le y \le x\le 1, \\
x, & x \le y \le 1.
\end{array}
\right.
\]

Let $L$ be the integral operator $L\,f = \int_0^1 k(x,y)f(y)dy$. Show
that $L:C[0,1]\to C[0,1]$ is bounded and that the norm
$\|L\|_{C[0,1]\to C[0,1]}\le 1$. Bonus (5 pts.): Show that
$\|L\|_{C[0,1]\to C[0,1]}=1/2$.
 Show that $k(x,y)$ is a Hilbert–Schmidt
kernel and that $\|L\|_{L^2\to L^2} \le \sqrt{\frac{1}{6}}$.
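The Hilbert–Schmidt bound comes from the double integral ∬k² dx dy, and since k(x,y) = min(x,y) here, that integral can be checked numerically (midpoint rule; illustrative only):

```python
import numpy as np

m = 2000
x = (np.arange(m) + 0.5) / m            # midpoint-rule nodes on [0, 1]
X, Y = np.meshgrid(x, x)
k = np.minimum(X, Y)                    # the kernel above: y if y <= x, else x
hs_sq = (k**2).mean()                   # approximates \int_0^1\int_0^1 k^2 dx dy
print(hs_sq)                            # close to 1/6
```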
 Finish the proof of the Projection Theorem: If for every $f\in
\mathcal H$ there is a $p\in V$ such that $\|p-f\|=\min_{v\in
V}\|v-f\|$, then $V$ is closed.
 Let L be a bounded linear operator on a Hilbert space $\mathcal
H$. Show that these two formulas for $\|L\|$ are equivalent:
 $\|L\| = \sup \{\|Lu\| : u \in {\mathcal H},\ \|u\| = 1\}$
 $\|L\| = \sup \{|\langle Lu,v\rangle| : u,v \in {\mathcal H},\
\|u\|=\|v\|=1\}$

Let H_{0} be the inner product space defined in
problem 3, HW 7. This becomes a Hilbert space if we require $f'$ to
be in $L^2[0,1]$.
 Show that if $f\in H_0$, then $\|f\|_{C[0,1]} \le
\|f\|_{H_0}$. Thus, $f$ is continuous and $H_0 \subset
C[0,1]$. (Showing continuity actually requires a little more work.)
 Show that $\delta_y(f) := f(y)$ is a bounded linear functional on
$C[0,1]$. ($\delta_y$ is the Dirac $\delta$-function; it's also called
the point evaluation functional.)
 Show that for all $f\in H_0$ there is a function $G_y$ in $H_0$
for which $f(y) = \langle f, G_y\rangle_{H_0}$. ($G(x,y):= G_y(x)$ is
called a reproducing kernel and H_{0} is
the reproducing kernel Hilbert space associated with $G(x,y)$.)
 Show that $G_x(y) = \big\langle G_x,G_y\big\rangle_{H_0}$;
equivalently,
\[
G(y,x)=\big\langle G(\cdot,x),G(\cdot,y)\big\rangle_{H_0}.
\]
Use this to show that $G(x,y)=G(y,x)$.
 Let $X:=\{0\le x_1 < x_2 < \cdots < x_n \le 1\}$. Show that the
$n\times n$ matrix $A_{j,k}:=G(x_j,x_k)$ is self-adjoint and positive
definite.
 Let $X:=\{0\le x_1 < x_2 < \cdots < x_n \le 1\}$. Suppose that
$f\in H_0$ and $y_j:=f(x_j)$. Show that there are unique coefficients $c_k$ for which
$s(x) := \sum_{k=1}^n c_k G(x,x_k)$ interpolates $f$ at the points in
$X$; i.e., $s(x_j)=y_j$, $j=1,\ldots, n$.
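A sketch of the interpolation in the last part. The problem does not give the kernel explicitly; the formula G(x,y) = min(x,y) − xy used below is an assumption to be checked against part (c) (it vanishes at the endpoints and reproduces point values under the $H_0$ inner product):

```python
import numpy as np

# Assumed reproducing kernel for H_0 (verify against part (c)):
# G(x, y) = min(x, y) - x*y
G = lambda x, y: np.minimum(x, y) - x * y

f = lambda x: np.sin(np.pi * x)         # an f in H_0: f(0) = f(1) = 0
xk = np.array([0.2, 0.4, 0.6, 0.8])     # the points X
yk = f(xk)

A = G(xk[:, None], xk[None, :])         # A_{j,k} = G(x_j, x_k)
c = np.linalg.solve(A, yk)              # unique since A is positive definite
s = lambda x: G(np.asarray(x)[:, None], xk[None, :]) @ c

print(np.abs(s(xk) - yk).max())         # interpolation conditions hold
```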
Assignment 9  Due Wednesday, November 15, 2017.
 Read section 3.5 and my notes on Compact
Operators, and on
Closed Range Theorem.
 Do the following problems.
 Section 3.4: 2(b)
 Show that every compact operator on a Hilbert space is a bounded
operator.
 Consider the Hilbert space $\mathcal H=\ell^2$ and let
$S=\{x=(\ldots, x_{-2},\ x_{-1},\ x_0,\ x_1, \ldots)\in \ell^2:
\sum_{n=-\infty}^\infty (n^2+1)|x_n|^2 <1\}$. Show that $S$ is a
precompact subset of $\ell^2$.
 A sequence {f_{n}} in a Hilbert space H is said to
be weakly convergent to f ∈ H if and only if lim_{ n
→ ∞} ⟨f_{n},g⟩ = ⟨f,g⟩ for every
g∈H. When this happens, we write f = w-lim f_{n}. For
example, if {φ_{n}} is any orthonormal sequence, then
φ_{n} converges weakly to 0. You are given that every
weakly convergent sequence is a bounded sequence (i.e., there is a
constant C such that ‖f_{n}‖ ≤ C for all n). Prove the
following:
Let K be a compact linear operator on a Hilbert space
H. If f_{n} converges weakly to f, then Kf_{n}
converges to Kf; that is, lim_{ n → ∞} ‖Kf_{n} − Kf‖ = 0.
Hint: Suppose this doesn't happen; then there will be a subsequence of
{f_{n}}, say {f_{n_k}}, such that ‖Kf_{n_k} − Kf‖ ≥ ε for all k. Use this
and the compactness of K to arrive at a contradiction. We remark that
the converse is also true: if a bounded linear operator $K$ maps
weakly convergent sequences into convergent sequences, then $K$ is
compact.
 Consider the finite rank (degenerate) kernel k(x,y) =
φ_{1}(x)ψ_{1}(y) +
φ_{2}(x)ψ_{2}(y),
where φ_{1} = 6x − 3, φ_{2} = 3x^{2},
ψ_{1} = 1, ψ_{2} = 8x − 6. Let Ku =
∫_{0}^{1} k(x,y)u(y)dy. Assume that L =
I − λK has closed range.

For what values of λ does the integral equation
u(x) − λ∫_{0}^{1} k(x,y)u(y)dy = f(x)
have a solution for all f ∈ L^{2}[0,1]?
 For these values, find the solution u = (I −
λK)^{−1}f — i.e., find the resolvent.
 For the values of λ for which the equation
does not have a solution for all f, find a condition on f
that guarantees a solution exists. Will the solution be unique?
 In the following, H is a Hilbert space and B(H) is the set of
bounded linear operators on H. Let L be in B(H) and let N := sup
{|⟨Lu, u⟩| : u ∈ H, ‖u‖ = 1}.
 Verify the identity ⟨L(u+αv), u+αv⟩ −
⟨L(u−αv), u−αv⟩ = 2ᾱ⟨Lu,v⟩ + 2α⟨Lv,u⟩, where |α| = 1.
 Show that N ≤ ‖L‖.
 Let L be a self-adjoint operator on H,
which may be real or complex. Use (a) and (b) to show that N =
‖L‖. (Hint: In the complex case, choose α so
that ᾱ⟨Lu,v⟩ = |⟨Lu,v⟩|. For the real case, use $\alpha=\pm 1$, as required.)
 Suppose that H is a complex Hilbert space. If L ∈
B(H), then use (a) and (b) to show that
N ≤ ‖L‖ ≤ 2N.
 For the real Hilbert space H = R^{2}, let $L =
\begin{pmatrix}
0& 1\\
-1 & 0 \end{pmatrix}.
$
Show that $\|L\| = 1$, but $N=0$.
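A quick numerical check of this example (illustrative only): for the rotation matrix L, the quadratic form ⟨Lu, u⟩ vanishes identically on R², while the operator norm is 1.

```python
import numpy as np

L = np.array([[0.0, 1.0],
              [-1.0, 0.0]])             # rotation by -90 degrees

op_norm = np.linalg.svd(L, compute_uv=False).max()   # operator norm ||L||

rng = np.random.default_rng(3)
u = rng.standard_normal((2, 1000))
u /= np.linalg.norm(u, axis=0)          # random unit vectors
N_sampled = np.abs((L @ u * u).sum(axis=0)).max()    # sampled sup |<Lu, u>|

print(op_norm, N_sampled)               # 1.0 and 0
```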
Assignment 10  Due Monday, November 27, 2017.
 Read sections 3.5, 3.6, 4.1 and my notes on
Spectral Theory for Compact Operators.
 Do the following problems.
 Section 3.4: 2(c), 6 (The condition in 6 should be
λμ_{i} ≠ 1.)
 Section 3.5: 1(b), 2(b)
 (This is a variant of problem 3.4.3 in Keener.) Consider the
operator $Ku(x) = \int_{-1}^1 (1-|x-y|)u(y)dy$ and the eigenvalue
problem $\lambda u = Ku$.
 Show that $K$ is a self-adjoint, Hilbert–Schmidt operator.
 Let $f\in C[-1,1]$. If $v= Kf$, show that $v''=-2f$,
$v(-1)+v(1)=0$, and $v'(-1)+v'(1)=0$.
 Use the previous part to convert the eigenvalue problem $\lambda
u = Ku$ into this eigenvalue problem:
\[
\left\{
\begin{aligned}
u''+&\frac{2}{\lambda} u =0,\\
u(-1)+&u(1) =0, \\
u'(-1)+ &u'(1)=0.
\end{aligned}
\right.
\]
Solve the eigenvalue problem above to get the eigenvalues and eigenvectors of
$\lambda u = Ku$. Show that the eigenvectors form a complete set for
$L^2[-1,1]$.
 Let $L\in \mathcal B(\mathcal H)$. Suppose that there is a constant
$c>0$, independent of $f$, such that $\|Lf\|\ge c\|f\|$ for all $f\in
N(L)^\perp$. Show that $R(L)$ is closed.
 Let K be a compact, self-adjoint
operator on a Hilbert space H, and let M be the closure of the span of
the set of eigenvectors {φ_{j}} corresponding to all
eigenvalues of K such that λ_{j} ≠ 0. (Note: both M
and M^{⊥} may be infinite dimensional.) You will need the
following:
Definition: A subspace $U$ of a Hilbert space $\mathcal
H$ is invariant under an operator $L$ if and only if $L$ maps $U$ into
itself.
 Show that M and M^{⊥} are both invariant under $K$.
 Show that K restricted to M^{⊥} is compact.
 Show that either M^{⊥} = {0} or that
M^{⊥} is the eigenspace for λ = 0.
 Show that one may choose a complete, orthonormal set for H from
among the eigenvectors of K, including ones for $\lambda=0$. (Use
Proposition 2.6 in
Spectral Theory for Compact Operators.)
 Finish the proof of Lemma 2.5 in
Spectral Theory for Compact Operators.
Assignment 11  Due Wednesday, December 6, 2017.
 Read sections 3.6, 4.1, 4.2, 4.3.1, 4.3.2, 4.5.1, and my notes on
Example problems for distributions.
 Do the following problems.
 Section 4.1: 4, 7
 Section 4.2: 1, 3, 4
 Section 4.3: 3
 Show that the fixed point found in the Contraction Mapping Theorem
is unique.
 Let $F:C[0,1]\to C[0,1]$ be defined by $F[u](t) :=
\int_0^1(2+st+u(s)^2)^{-1}ds$, $0\le t\le 1$. Let $\|\cdot
\|:=\|\cdot \|_{C[0,1]}$. Let $B_r:=\{u\in C[0,1]\,:\, \|u\|\le
r\}$.
 Show that $F: B_1\to B_{1/2}\subset B_1$.
 Let $D$ be an open subset of a Banach space $V$. We say that a
map $G:D\to V$ is a Lipschitz contraction on $D$ if and only
if there is a constant $0\le \alpha<1$ such that $\|G[u]-G[v]\|\le
\alpha \|u-v\|$. Show that $F$ is a Lipschitz contraction on $B_1$,
with Lipschitz constant $\alpha \le 1/2$.
 Show that $F$ has a fixed point in $B_1$.
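The contraction property guarantees that fixed-point iteration converges, and this can be watched numerically. A sketch (illustrative discretization of the integral with the trapezoidal rule; the grid size is arbitrary):

```python
import numpy as np

m = 1001
s = np.linspace(0.0, 1.0, m)            # grid used for both s and t
w = np.full(m, 1.0 / (m - 1))
w[0] = w[-1] = 0.5 / (m - 1)            # trapezoidal-rule weights

def F(u):
    # F[u](t) = \int_0^1 (2 + s t + u(s)^2)^{-1} ds, evaluated on the grid
    integrand = 1.0 / (2.0 + np.outer(s, s) + (u**2)[:, None])
    return w @ integrand

u = np.zeros(m)
for _ in range(60):                     # contraction: error shrinks by >= 1/2
    u = F(u)

residual = np.abs(F(u) - u).max()
print(residual, u.max())                # residual ~ 0; u stays in B_{1/2}
```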
 Let $L$ be in $\mathcal B (\mathcal H)$.
 Let $A$ and $B$ be in $\mathcal B (\mathcal H)$. Show that
$\|AB\|\le \|A\|\,\|B\|$. Use this to show that $\|L^k\| \le \|L\|^k$,
$k=2,3,\ldots$.
 Suppose that $|\lambda|\, \|L\|<1$. In class (11/29/17), we showed
that the truncation error $E_n$ satisfies
\[
E_n=\big\|(I - \lambda L)^{-1} - \sum_{k=0}^{n-1}\lambda^k L^k\big\| \le
\frac{|\lambda|^n \|L\|^n}{1 - |\lambda| \|L\|}.
\]
\]
Let $L$ be as in
problem 3, HW 8. Use the bound on $\|L\|$ in 3(b) to estimate how
many terms of the Neumann expansion would be required to approximate
$(I - \lambda L)^{-1}$ to within $10^{-8}$, if $|\lambda|\le 0.2$.
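A back-of-the-envelope computation for this estimate (a sketch; it assumes the Hilbert–Schmidt bound ‖L‖ ≤ √(1/6) from HW 8, 3(b)):

```python
import numpy as np

L_norm = np.sqrt(1.0 / 6.0)             # assumed bound on ||L|| from HW 8, 3(b)
lam = 0.2
q = lam * L_norm                        # |lambda| ||L||, about 0.0816

n = 1
while q**n / (1.0 - q) > 1e-8:          # the truncation-error bound above
    n += 1
print(n)                                # terms needed: 8
```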
 Let $Lu=u''$, $u(0)=0$, $u'(1)=2u(1)$.
 Show that the Green's function
for this problem is
\[
G(x,y)=\left\{
\begin{array}{rl}
(2y-1)x, & 0 \le x < y \le 1\\
(2x-1)y, & 0 \le y < x \le 1.
\end{array} \right.
\]
 Let $Kf(x) := \int_0^1G(x,y)f(y)dy$. Show that $K$ is a self-adjoint
Hilbert–Schmidt operator, and that $0$ is not an eigenvalue of $K$.
 Use (b) and the spectral theory of compact operators to show that the
orthonormal set of eigenfunctions for $L$ form a complete set in
$L^2[0,1]$.
Updated 11/29/2017.