Math 641-600 Fall 2013
Assignments
Assignment 1 - Due Thursday, September 5.
- Let $V$ be a real finite dimensional inner product space, with
inner product $\langle \cdot,\cdot \rangle_V$, and let
$B=\{v_1,v_2,\ldots,v_n\}$ be an ordered basis for $V$.
- If $\Phi$ is the associated coordinate map, show that $\langle
\cdot,\cdot \rangle_{\mathbb R^n} := \langle
\Phi^{-1}(\cdot),\Phi^{-1}(\cdot) \rangle$ defines an inner product
on $\mathbb R^n$.
- Show that if $\mathbf x, \mathbf y \in \mathbb R^n $, then
$\langle \mathbf x,\mathbf y \rangle_{\mathbb R^n} = \mathbf
y^TG\mathbf x$, where $G_{jk} = \langle v_k,v_j \rangle_V$. (The
matrix is called the Gram matrix for $B$.)
- In the previous problem, suppose that $B=\{v_1,v_2,\ldots,v_n\}$
is simply a subset of vectors in $V$ and that $U=\text{span}(B)$.
Show that $B$ is a basis for $U$ (i.e. that $B$ is linearly
independent) if and only if $\mathbf y^TG\mathbf x$ is an inner
product for $\mathbb R^n$.
- Let $U$ be a subspace of an inner product space $V$, with the inner
product and norm being $\langle \cdot,\cdot \rangle$ and $\|\cdot\|$.
Also, let $v \in V$. (Do not assume that $U$ is finite
dimensional or use arguments requiring a basis.)
- Fix $v\in V$. Show that $p\in U$ satisfies $ \min_{u\in U}\|v-u\|
= \|v-p\|$ if and only if $v-p$ is orthogonal to the subspace $U$.
- Show that $p$ is unique, given that it exists for $v$.
- Suppose $p$ exists for every $v\in V$. Since $p$ is uniquely
determined by $v$, we may define a map $P: V \to U$ via
$Pv:=p$. Show that $P$ is a
linear map and that $P$ satisfies $P^2 = P$. ($P$ is called
an orthogonal projection. The vector $p$ is the orthogonal
projection of $v$ onto $U$.)
- Let $V$, $B$, $U$ and $G$ be as in problem 1. ($B$ is assumed to
be a basis for $U$.)
- Let $v\in V$ and $d_k := \langle v,v_k\rangle_V$. Show that
$p=\sum_j x_j v_j\in U$ is the orthogonal projection of $v$ onto $U$
if and only if the $x_j$'s satisfy the normal equations,
$d_k = \sum_j G_{kj}x_j$.
- Show that the orthogonal projection $P:V\to U$ exists.
- Show that if B is orthonormal, then $Pv=\sum_j \langle v,v_j\rangle_V
v_j$.
- Show that equality holds in Schwarz's inequality if and only
if $\{u,v\}$ is linearly dependent.
- Suppose that $F\in C[0,1]$, $F(x)\ge 0$, and $F(x_0)>0$ for some
$x_0\in [0,1]$. Show that there is a closed interval $[a,b]\subseteq
[0,1]$, $a\ne b$, that contains $x_0$ and on which $F(x)\ge \frac12 F(x_0)$.
- Let $B=\{v_1,\ldots,v_n\}$ be a basis for a vector space
$V$. Define linear functionals $\varphi_k$, $k=1,\ldots, n$, via
$\varphi_k(v_j)= \delta_{jk}$, where $\delta_{jk}$ is the Kronecker
$\delta$.
- Show that $B^\ast := \{\varphi_1, \ldots, \varphi_n\}$ is a basis
for $V^\ast$. ($B^*$ is called the dual basis for $B$.)
- Let $V=\mathbb R^n$ and suppose that $B\;=\;\{\mathbf x_1,
\ldots,\mathbf x_n\}$ is a basis of column vectors for $\mathbb
R^n$, and let $X=[\mathbf x_1 \cdots \mathbf x_n]$. Show that
${\mathbb R^n}^\ast$ may be identified with the set of $1\times n$
row vectors, and that $B^\ast$ is then the set of rows of $X^{-1}$.
Assignment 2 - Due Thursday, September 12.
- This problem concerns several important inequalities.
- Show that if $\alpha, \beta$ are positive and $\alpha + \beta = 1$,
then for all $u,v \ge 0$ we have
$u^\alpha v^\beta \le \alpha u + \beta v$.
- Let $\mathbf x,\mathbf y \in \mathbb R^n$, let $p > 1$, and define
$q$ by $q^{-1} = 1 - p^{-1}$. Prove Hölder's inequality,
\[
\Big|\sum_j x_jy_j\Big| \le \|\mathbf x\|_p \|\mathbf y\|_q.
\]
Hint: Using the inequality in part (a), first prove it for $\|\mathbf x\|_p = \|\mathbf y\|_q = 1$, then scale to get the final inequality.
- Suppose $\varphi=[y_1 \ldots y_n] \in {\ell^{p}}^*$.
Hölder's inequality implies that $\|\varphi\|_{\ell^{p*}}\le
\|y\|_q$. Show that we actually have $\|\varphi\|_{\ell^{p*}} =
\|y\|_q$.
- Let $\mathbf x,\mathbf y \in \mathbb R^n$, and let $p > 1$. Prove
Minkowski's inequality,
\[
\|\mathbf x+\mathbf y\|_p \le \|\mathbf x\|_p + \|\mathbf y\|_p.
\]
Use this to show that $\|\mathbf x\|_p$ defines a norm on
$\mathbb R^n$. Hint: you will need to use Hölder's
inequality, along with a trick.
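These inequalities are easy to sanity-check numerically. A minimal NumPy sketch (my own illustration, with $p=3$ and random vectors; not part of the proofs asked for above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
y = rng.standard_normal(10)

p = 3.0
q = 1.0 / (1.0 - 1.0 / p)   # conjugate exponent: 1/p + 1/q = 1

def pnorm(v, p):
    """The l^p norm ||v||_p = (sum_j |v_j|^p)^(1/p)."""
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

# Hoelder: |sum_j x_j y_j| <= ||x||_p ||y||_q
holder_lhs = abs(np.dot(x, y))
holder_rhs = pnorm(x, p) * pnorm(y, q)

# Minkowski: ||x + y||_p <= ||x||_p + ||y||_p
mink_lhs = pnorm(x + y, p)
mink_rhs = pnorm(x, p) + pnorm(y, p)

assert holder_lhs <= holder_rhs
assert mink_lhs <= mink_rhs
```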
- $L^2$ minimization. Find the straight line $y=a+bx$ that minimizes $\int_0^1 (e^{-x} - a - bx)^2dx$.
- $L^1$ minimization. Find the straight line $y=a+bx$ that minimizes $\int_0^1 |e^{-x} - a - bx|dx$, by following these steps.
- Whatever the minimizer is, geometric considerations show that $e^{-x}$ and $a+bx$ will cross at two points, $0 < s < t < 1$. Find these two points by minimizing, over $a,b$, the area $A$ between $f(x)=e^{-x}$ and $a+bx$:
\[
A=\int_0^1 |e^{-x} - a - bx|dx = \int_0^s (e^{-x} - a - bx)dx + \int_s^t (a+ bx-e^{-x})dx +\int_t^1 (e^{-x} - a - bx)dx.
\]
- Use the crossing conditions $a+bs=e^{-s}$ and $a+bt=e^{-t}$ to find $a$ and $b$.
- Use your favorite software (mine is Matlab) and plot, on the same
set of axes, $e^{-x}$ and the two minimization solutions found in the
previous two problems.
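For the $L^2$ problem, the normal equations reduce to a $2\times 2$ system built from the Gram matrix of $\{1,x\}$ in $L^2[0,1]$. A NumPy sketch of that computation (a numerical check, not a substitute for the derivation):

```python
import numpy as np

# Minimize ∫_0^1 (e^{-x} - a - b x)^2 dx via the normal equations:
# Gram matrix of the basis {1, x} in L^2[0,1], and the moments of e^{-x}.
G = np.array([[1.0, 0.5],
              [0.5, 1.0 / 3.0]])
d = np.array([1.0 - np.exp(-1.0),          # ∫_0^1 e^{-x} dx
              1.0 - 2.0 * np.exp(-1.0)])   # ∫_0^1 x e^{-x} dx
a, b = np.linalg.solve(G, d)               # a ≈ 0.943, b ≈ -0.622

# Sanity check: the residual should be L^2-orthogonal to both 1 and x.
xs = np.linspace(0.0, 1.0, 200001)
r = np.exp(-xs) - a - b * xs
for basis in (np.ones_like(xs), xs):
    prod = r * basis
    integral = np.sum(0.5 * (prod[:-1] + prod[1:]) * np.diff(xs))
    assert abs(integral) < 1e-8
```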
- Let $V$ be a finite dimensional inner product space and let $U$ be a
subspace of $V$. Recall that the orthogonal complement of $U$ is
\[
U^\perp = \{v \in V \,|\, \langle v,u\rangle = 0 \ \text{for all}\ u \in U\}.
\]
Show that $V = U\oplus U^\perp$, where $\oplus$ symbolizes
the direct sum of vector spaces. Also, show that
$(U^\perp)^\perp = U$.
Assignment 3 - Due Thursday, September 19.
- Suppose that $A$ is an $m\times n$ matrix, with $m>n$, so that
the columns of $A$ are in $\mathbb R^m$. Assume that the rank of $A$
is $n$.
- Use the Gram-Schmidt process to find a factorization of $A$ into the form
$A=QR$, where $Q$ is an $m\times n$ matrix whose columns are
orthonormal, and $R$ is an $n\times n$ upper triangular matrix.
- Show that $Q^TQ=I_{n\times n}$ and that $QQ^T$ is the orthogonal
projection of $\mathbb R^m$ onto the column space of $A$.
- Let $m>n$. Suppose that $\mathbf b\in \mathbb R^m$ and that $A =
[\mathbf a_1 \cdots \mathbf a_n]$ is as in the previous problem. We
want to minimize $\| \mathbf b - \sum_{j=1}^n x_j\mathbf
a_j\|_{\mathbb R^m}=\|\mathbf b - A\mathbf x\|_{\mathbb R^m}$ over
$\mathbf x = [x_1 \ \cdots \ x_n]^T.\,$ Show that the minimizer is
$\mathbf x_0 = R^{-1}Q^T\mathbf b$, where $A=QR$. (Hint: let
$\mathbf z = R\mathbf x$, so that you are minimizing $\| \mathbf b -
Q\mathbf z \|$ over $\mathbf z$. Then, use the normal equations.)
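A sketch of both problems in NumPy: classical Gram-Schmidt to build $Q$ and $R$, then the least-squares minimizer $\mathbf x_0 = R^{-1}Q^T\mathbf b$ checked against a library solver. (The random test matrix is my own choice.)

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR: A (m x n, rank n) = Q R, with
    Q (m x n) having orthonormal columns and R (n x n) upper triangular."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along q_i
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

rng = np.random.default_rng(1)
A = rng.standard_normal((7, 3))
b = rng.standard_normal(7)

Q, R = gram_schmidt_qr(A)
assert np.allclose(Q.T @ Q, np.eye(3))    # Q^T Q = I
assert np.allclose(Q @ R, A)              # A = QR

# Minimizer of ||b - A x||: x0 = R^{-1} Q^T b
x0 = np.linalg.solve(R, Q.T @ b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x0, x_ref)
```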
- Fredholm Alternative. Let $V$ and $W$ be finite
dimensional inner product spaces, with inner products $\langle
\cdot,\cdot \rangle_{V}$ and $\langle \cdot,\cdot \rangle_{W}$,
respectively. Also, let $L:V\to W$ be linear.
- Show that $\mathcal R(L) \subseteq \mathcal N(L^\ast)^\perp$,
where $\mathcal R(L)$ is the range of $L$ and $\mathcal N(L^*)$ is the
null space of the adjoint $L^\ast$.
- Show that $\mathcal R(L)= \mathcal N(L^\ast)^\perp$, by
contradiction. If $\mathcal R(L) \ne \mathcal N(L^\ast)^\perp$, then
there is a vector $w \ne 0$ such that $w\in \mathcal R(L)^\perp \cap
\mathcal N(L^\ast)^\perp$. But, if $w\in \mathcal R(L)^\perp \cap
\mathcal N(L^\ast)^\perp$, then $w=0$. (This means that $W=\mathcal
R(L) \oplus \mathcal N(L^*)$.)
- Suppose that $A$ is an $n\times n$ real matrix such that $\mathbf
x^TA\mathbf x>0$ for $\mathbf x\ne \mathbf 0$. Use the Fredholm
Alternative to determine whether $A$ is invertible.
- Let $U$ be a unitary $n\times n$ matrix; that is, $U^\ast U = I$. Do
the following.
- Show that $\langle U\mathbf x, U\mathbf y\rangle = \langle \mathbf x, \mathbf y\rangle$.
- Show that the eigenvalues of $U$ all lie on the unit circle,
$|\lambda|=1$.
- Show that $U$ is diagonalizable. (Hint: Modify the proof used in
class to show that a self-adjoint matrix is diagonalizable.)
- Suppose that $U$ is real as well as unitary, i.e.,
orthogonal. For $n$ odd, show that $1$ or $-1$ is an eigenvalue of
$U$. (It's possible to have both simultaneously.)
Assignment 4 - Due Thursday, September 26.
- Let $A$ and $B$ be self-adjoint matrices, which may be real or
complex. We say that $A\le B$ if and only if $\langle A\mathbf
x,\mathbf x\rangle \le \langle B\mathbf x,\mathbf x\rangle$ for all
$\mathbf x$.
- If $\lambda_1\ge \lambda_2\ge \cdots \ge \lambda_n$ are the eigenvalues
of $A$ and $\tilde \lambda_1\ge \tilde \lambda_2\ge \cdots \ge \tilde
\lambda_n$ are the eigenvalues of $B$, then show that $\lambda_k \le
\tilde \lambda_k$.
- Show that $\text{Trace}(A) \le \text{Trace}(B)$ if $A\le B$.
- Show that if we increase a diagonal entry of $A$, then the
resulting matrix $B$ satisfies $A\le B$.
- (Keener, problem 1.3(b)). Use the previous part to estimate the
lowest eigenvalue of the matrix below. Keener gets $-\frac13$. Using
Matlab, you find that the lowest eigenvalue is actually about $-2.2$. Can you beat $-\frac13$?
\[
A=\begin{pmatrix}8 & 4 & 4\\ 4 & 8 & -4 \\ 4 &
-4 & 3\end{pmatrix}
\]
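As a check on any estimate, the eigenvalues of this matrix can be computed directly; the characteristic polynomial factors as $(\lambda-12)(\lambda^2-7\lambda-20)$, so the lowest eigenvalue is $(7-\sqrt{129})/2 \approx -2.18$:

```python
import numpy as np

A = np.array([[8.0, 4.0, 4.0],
              [4.0, 8.0, -4.0],
              [4.0, -4.0, 3.0]])

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
lam = np.linalg.eigvalsh(A)

# det(A - λI) = -(λ^3 - 19λ^2 + 64λ + 240) = -(λ - 12)(λ^2 - 7λ - 20),
# so the smallest root is (7 - sqrt(129))/2 ≈ -2.179.
assert abs(lam[0] - (7.0 - np.sqrt(129.0)) / 2.0) < 1e-10
```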
- (This is a generalization of Keener's problem 1.3.5). Let $A$ be
a self-adjoint matrix with eigenvalues $\lambda_1\ge
\lambda_2\ge \cdots \ge \lambda_n$. Show that for $ 2\le k < n$ we have
\[ \max_U \sum_{j=1}^k \langle Au_j,u_j \rangle =\sum_{j=1}^k
\lambda_j, \]
where the maximum is taken over all orthonormal sets $U=\{u_1,\ldots,u_k\}$. (Hint: Put $A$ in
diagonal form and use a judicious choice of $B$.)
- Show that $\ell^\infty$ is a Banach space under the norm
$\|\{x_j\}\|= \sup_j |x_j|$.
- Show that $\ell^2$ is a Hilbert space under the inner product
\[
\langle \{x_j\},\{y_j\} \rangle :=\sum_{j=1}^\infty \bar y_j x_j.
\]
- Let $0\le \delta \le 1$. We define the modulus of continuity for
$f\in C[0,1]$ by
\[
\omega(f;\delta) := \sup_{|\,s-t\,|\,\le\, \delta}|f(s)-f(t)|,
\ \text{where }\ s,t \in [0,1].
\]
- Explain why $\omega(f;\delta)$ exists for every $f\in C[0,1]$.
- Fix $\delta$. Let $S_\delta = \{ \varepsilon > 0 \,|\, |f(t)-f(s)| < \varepsilon \ \text{for all}\ s,t \in [0,1],\ |s-t| \le \delta\}$. In other words, for given $\delta$, $\varepsilon$ is in the
set if $|f(t)-f(s)| < \varepsilon$ holds for all $|s-t| \le \delta$. Show that
\[
\omega(f;\delta) = \inf S_\delta.
\]
- Show that $\omega(f;\delta)$ is nondecreasing as a
function of $\delta$. (Or, more to the point, as $\delta \downarrow 0$,
$\omega(f;\delta)$ gets smaller.)
- Show that $\lim_{\delta \downarrow 0}\, \omega(f;\delta) = 0$.
Assignment 5 - Due Thursday, October 3.
- Calculus problem: Let $g$ be $C^2$ on an interval
$[a,b]$. Let $h = b-a$. Show that if $g(a) = g(b) = 0$, then
\[
\|g\|_{C[a,b]} \le \frac{h^2}{8}\, \|g''\|_{C[a,b]}.
\]
Give an example that shows
that $1/8$ is the best possible constant.
- Use the previous problem to show that if $f \in
C^2[0,1]$, then the equally spaced linear spline interpolant
$f_n$ satisfies
\[
\|f - f_n\|_{C[0,1]} \le (8n^2)^{-1} \|f''\|_{C[0,1]}.
\]
- Let $0<\alpha<1$ be fixed. Define $f(x) := x^\alpha$, $x\in
[0,1]$. Show that $\omega(f;\delta) \le C\delta^\alpha$, where $C$ is
independent of $\delta$.
- Derive the trapezoidal rule for approximating $\int_0^1f(x)dx$,
$f\in C[0,1]$, by integrating the linear spline (in
$S^{\frac{1}{n}}(1,0)$) that interpolates $f$ at $x_j=\frac{j}{n}$,
$j=0,\ldots,n$. Estimate the error involved.
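Integrating the interpolating linear spline gives the familiar composite trapezoidal rule; a quick numerical sketch (the test integrand $e^x$ is my choice) showing the expected $O(n^{-2})$ error for a $C^2$ integrand:

```python
import numpy as np

def trapezoid(f, n):
    """Composite trapezoidal rule for ∫_0^1 f(x) dx with nodes x_j = j/n:
    this is exactly the integral of the linear spline interpolating f."""
    x = np.linspace(0.0, 1.0, n + 1)
    y = f(x)
    h = 1.0 / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = np.exp
exact = np.e - 1.0                      # ∫_0^1 e^x dx
err10 = abs(trapezoid(f, 10) - exact)
err20 = abs(trapezoid(f, 20) - exact)

# For f ∈ C^2 the error is O(n^{-2}): doubling n cuts it by about 4.
assert err20 < err10 / 3.5
```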
- Let $V$ be a Banach space. Suppose that there is an uncountable
set of vectors $U\subset V$ with the property that if $u,v\in U$, then
there exists $\varepsilon_0>0$ such that $\|u-v\|\ge
\varepsilon_0>0$. Prove that $V$ is non separable. Use this to show
that $L^\infty[0,1]$ is non separable.
- Recall that the B-splines $N_m$ satisfy the recursion relation
\[
N_m(x) = \frac{x}{m-1}N_{m-1}(x)+\frac{m-x}{m-1}N_{m-1}(x-1),
\ m\ge 2.
\]
Use this to show that $N_3(x) = \frac12 \big((x)_+^2 - 3(x-1)_+^2 +
3(x-2)_+^2 - (x-3)_+^2 \big)$. Hint: $(x-a)(x-a)_+^k=(x-a)_+^{k+1}$.
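A quick numerical check of the recursion against the claimed closed form, starting from $N_1 = \chi_{[0,1)}$ (the usual normalization; assumed here):

```python
def N(m, x):
    """Cardinal B-spline N_m via the recursion
    N_m(x) = x/(m-1) N_{m-1}(x) + (m-x)/(m-1) N_{m-1}(x-1),
    starting from N_1 = indicator of [0, 1)."""
    if m == 1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x * N(m - 1, x) + (m - x) * N(m - 1, x - 1.0)) / (m - 1)

def trunc(x, k):
    """Truncated power (x)_+^k."""
    return x ** k if x > 0 else 0.0

def N3_closed(x):
    """Claimed closed form for N_3."""
    return 0.5 * (trunc(x, 2) - 3 * trunc(x - 1, 2)
                  + 3 * trunc(x - 2, 2) - trunc(x - 3, 2))

# Compare the recursion and the closed form on a grid over supp N_3 = [0, 3].
pts = [j / 100 for j in range(301)]
assert all(abs(N(3, x) - N3_closed(x)) < 1e-12 for x in pts)
```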
Assignment 6 - Due Tuesday, October 15
- Let $y_j$, $j=1,\dots,n$. Consider the
cubic Hermite spline $s(x) \in S^h(3,1)$, with $h = 1/n$,
that satisfies $s(j/n) = y_j$ and minimizes
$\int_0^1(s'')^2dx$. Show that $s(x)$ is actually in
$C^{(2)}[0,1]$; that is, show that $s(x) \in S^h(3,2)$.
- Variational/Finite-element problem. We want to solve the boundary value
problem (BVP): $-u'' = f(x)$, $u(0) = u(1) = 0$, $f \in C[0,1]$.
- Let $H$ be the set of all continuous functions vanishing at $x = 0$
and $x = 1$, and having $L^2$ derivatives. Also, let $H$ have the
inner product
\[
\langle f,g\rangle_H = \int_0^1 f'(x)\, g'(x)\, dx.
\]
Use integration by parts to convert the BVP into its "weak" form:
\[
\langle u,v\rangle_H = \int_0^1 f(x) v(x)\, dx \ \text{for all}\ v \in H.
\]
- Conversely, suppose that $u \in H$ is also in
$C^{(2)}[0,1]$ and that $u$ satisfies
\[
\langle u,v\rangle_H = \int_0^1 f(x) v(x)\, dx \ \text{for all}\ v \in H.
\]
Show that $u$ satisfies the BVP.
- Let $V = S^h(1,0)$, with $h = 1/n$. Thus $V$ is spanned by
$\varphi_j(x) := N_2(nx-j+1)$, $j = 1,\ldots,n-1$. (Here, $N_2(x)$ is the
linear B-spline.) Show that the
least-squares approximation to $u$ from $V$ is $y = \sum_j
\alpha_j\varphi_j(x) \in V$, where the
$\alpha_j$'s satisfy $G\alpha = \beta$, with
$\beta_j = \langle u,\varphi_j \rangle_H
= \int_0^1 f(x)\varphi_j(x)\, dx$, $j=1,\ldots,n-1$,
and $G_{kj} = \langle \varphi_j, \varphi_k \rangle_H$.
- Show that $G_{kj} = \langle \varphi_j, \varphi_k \rangle_H$ is given by
\[
G_{j,j} = 2n, \ j = 1,\ldots,n-1; \quad
G_{j,j-1} = -n, \ j = 2,\ldots,n-1; \quad
G_{j,j+1} = -n, \ j = 1,\ldots,n-2;
\]
and $G_{j,k} = 0$ for all other possible $k$.
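Assembling this tridiagonal system and solving it takes only a few lines of NumPy. The right-hand side $f(x)=\pi^2\sin(\pi x)$ is my own test choice (exact solution $u=\sin(\pi x)$); it illustrates how accurate the Galerkin solution is at the nodes:

```python
import numpy as np

n = 16

# Gram (stiffness) matrix from the problem: tridiagonal, (n-1) x (n-1).
G = 2 * n * np.eye(n - 1) - n * (np.eye(n - 1, k=1) + np.eye(n - 1, k=-1))

def phi(j, x):
    """Hat function φ_j(x) = N_2(n x - j + 1): peak value 1 at x = j/n."""
    return np.maximum(0.0, 1.0 - np.abs(n * x - j))

# Test data (my choice, not from the assignment): f = π² sin(πx),
# so that -u'' = f, u(0) = u(1) = 0 has exact solution u = sin(πx).
xs = np.linspace(0.0, 1.0, 20001)
fx = np.pi ** 2 * np.sin(np.pi * xs)

def integrate(g):
    """Composite trapezoid rule on the fine grid xs."""
    return float(np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(xs)))

beta = np.array([integrate(fx * phi(j, xs)) for j in range(1, n)])
alpha = np.linalg.solve(G, beta)

# In 1D, the Galerkin solution with hats reproduces the exact solution
# at the nodes (up to quadrature error in beta).
nodes = np.arange(1, n) / n
assert np.max(np.abs(alpha - np.sin(np.pi * nodes))) < 1e-3
```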
- Let $S=\{s\in C^{(2)}(\mathbb R)\,|\, s\ \text{is a cubic on}\
[j,j+1], j\in \mathbb Z\}$. (These are all cubic B-splines with knots
at the integers and defined on all of $\mathbb R$.) Suppose $s\in S$
has compact support in $[0,M]$. Determine the smallest value of $M$
for which such an $s \not\equiv 0$ exists.
- Let $U:=\{u_j\}_{j=1}^\infty$ be an orthonormal set in a Hilbert
space $\mathcal H$. Show that the following two statements are
equivalent. (You may use what we have proved for o.n. sets in
general; for example, Bessel's inequality, minimization properties,
etc.)
- $U$ is maximal in the sense that there is no non-zero vector in
$\mathcal H$ that is orthogonal to $U$. (Equivalently, $U$ is not a
proper subset of any other o.n. set in $\mathcal H$.)
- Every vector in $\mathcal H$ may be uniquely represented as the
series $f=\sum_{j=1}^\infty \langle f, u_j\rangle u_j$.
- Show that every separable Hilbert space has an o.n. basis.
Assignment 7 - Due Thursday, October 24
- $f(x) = e^{x}$, $-\pi < x < \pi$.
- Find the complex form of the Fourier series for $f$.
- Sketch three periods of the $2\pi$-periodic function to which the
series converges pointwise. (Hand-drawn is fine. No need to use a
computer here.)
- Find the sum of the series $\sum_{n=0}^\infty \frac{1}{n^2+1}$.
- Estimate the error $\|f-S_N\|_{L^2[-\pi,\pi]}$, where $S_N$ is
the partial sum of the Fourier series for $f$.
- Prove this: Let $g$ be a $2\pi$-periodic piecewise continuous
function. Then, $\int_{-\pi+c}^{\pi+c} g(u)du$ is independent of
$c$. (Remark: This holds for $g$ integrable on each bounded interval
of $\mathbb R$.)
- Use the previous result to show that if $f$ is $2\pi$-periodic
and piecewise smooth, then it has the Fourier series $f(x) \sim a_0 +
\sum_{n=1}^\infty a_n\cos(nx)+b_n\sin(nx)$, where
\[
a_0=\frac{1}{2\pi} \int_0^{2\pi} f(x)dx, \ a_n=\frac{1}{\pi}
\int_0^{2\pi} f(x)\cos(nx)dx, \ b_n = \frac{1}{\pi} \int_0^{2\pi}
f(x)\sin(nx) dx.
\]
Formulate a theorem on the pointwise convergence of the series.
- Find the Fourier series for $f(x) = x,\ 0 < x < 2\pi$. Sketch three
periods of the $2\pi$-periodic function to which the series converges
pointwise. (Hand-drawn is fine. No need to use a computer here.)
- Find the Fourier series for
$f(x) = \left\{ \begin{array}{cl} 1 & x\in [-\tfrac{1}{4}
\pi , \tfrac{1}{4}\pi], \\
0 & x \in (-\pi, -\tfrac{1}{4}\pi) \ \text{or
}x \in (\tfrac{1}{4}\pi , \pi).
\end{array} \right.
$
- Consider the series $\sum_{n=-\infty}^\infty c_n e^{inx}$, where
$\sum_{n=-\infty}^\infty |c_n| <\infty$. Show that the series
$\sum_{n=-\infty}^\infty c_n e^{inx}$ converges uniformly to a
$2\pi$-periodic continuous function $f(x)$ and that the series is the
Fourier series for $f$. Also, show that the series converges to $f$ in
$L^2[-\pi,\pi]$.
Assignment 8 - Due Thursday, November 14
- A measurable function whose range consists of a finite number of values is a simple function. One can also define a simple function as a linear combination of a finite number of characteristic functions of measurable sets. Let $f(x)=x^2$, $-1\le x \le 2$. Find two simple functions
$s_1$ and $s_2$ such that $s_1(x) \le f(x) \le s_2(x)$ and
\[
\int_{-1}^2s_2(x)dx - \int_{-1}^2 s_1(x)dx < 0.01.
\]
How well do these integrals compare with $\int_{-1}^2 f(x)dx$?
- Let $F(s) = \int_0^\infty e^{-st} f(t)\,dt$ be the Laplace transform of $f \in
L^1([0,\infty))$. Use the Lebesgue dominated convergence
theorem to show that $F$ is continuous from the right at $s = 0$. That is,
show that
\[
\lim_{s\downarrow 0} F(s) = F(0) = \int_0^\infty f(t)\,dt.
\]
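The claim is easy to illustrate numerically for a concrete $f$; here $f(t)=e^{-t}$ (my choice), for which $F(s)=1/(1+s)$, truncating the integral to a finite interval:

```python
import numpy as np

def laplace(s, T=60.0, m=200001):
    """Quadrature sketch of F(s) = ∫_0^∞ e^{-s t} f(t) dt for f(t) = e^{-t},
    truncated to [0, T] (the tail beyond T is negligible here)."""
    t = np.linspace(0.0, T, m)
    g = np.exp(-s * t) * np.exp(-t)
    return float(np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(t)))

# For this f, F(s) = 1/(1+s), so F(s) -> F(0) = 1 as s ↓ 0, exactly as
# dominated convergence predicts (|e^{-st} f(t)| <= |f(t)| ∈ L^1).
for s in (1.0, 0.1, 0.01, 0.0):
    assert abs(laplace(s) - 1.0 / (1.0 + s)) < 1e-4
```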
- Let $f_n(x) = n^{3/2}\, x\, e^{-nx}$, where
$x \in [0,1]$ and $n = 1, 2, 3, \ldots$.
- Verify that the pointwise limit of $f_n(x)$ is $f(x) = 0$.
- Show that $\|f_n\|_{C[0,1]} \to \infty$ as $n\to\infty$, so that
$f_n$ does not converge uniformly to $0$.
- Find a constant $C$, independent of $n$, such that
$f_n(x) \le C x^{-1/2}$ for all $x \in (0,1]$.
- Use the Lebesgue dominated convergence theorem to show that
\[
\lim_{n\to\infty} \int_0^1 f_n(x)\,dx = 0.
\]
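A numerical sketch of the two phenomena (growing sup norms, vanishing integrals), with quadrature standing in for the exact formulas:

```python
import numpy as np

def fn(n, x):
    """f_n(x) = n^{3/2} x e^{-n x}."""
    return n ** 1.5 * x * np.exp(-n * x)

x = np.linspace(0.0, 1.0, 100001)
dx = np.diff(x)

sup_norms = []
integrals = []
for n in (1, 10, 100, 1000):
    y = fn(n, x)
    sup_norms.append(float(y.max()))   # peak ≈ sqrt(n)/e, at x = 1/n
    integrals.append(float(np.sum(0.5 * (y[:-1] + y[1:]) * dx)))

# ||f_n||_{C[0,1]} grows without bound: no uniform convergence ...
assert all(s1 < s2 for s1, s2 in zip(sup_norms, sup_norms[1:]))
# ... yet the dominating bound C x^{-1/2} forces ∫_0^1 f_n -> 0.
assert integrals[-1] < 0.1
```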
- Let $L$ be a bounded linear operator on a Hilbert space $\mathcal H$. Show that
the two formulas for $\|L\|$ are equivalent:
- $\|L\| = \sup \{\|Lu\| : u \in {\mathcal H},\ \|u\| = 1\}$
- $\|L\| = \sup \{|\langle Lu,v\rangle| : u,v \in {\mathcal H},\ \|u\|=\|v\|=1\}$
- Let $V$ be a Banach space and let $L:V\to V$ be linear. Show $L$ is bounded if and only if $L$ is continuous.
- Consider the boundary value problem $-u''(x)=f(x)$, where $0\le x \le 1$, $\, f\in C[0,1]$, $\, u(0)=0$ and $u'(1)=0$.
- Verify that the solution is given by $u(x) = \int_0^1 k(x,y)f(y)dy$, where
\[
k(x,y) = \left\{
\begin{array}{cl}
y, & 0 \le y \le x, \\
x, & x \le y \le 1.
\end{array}
\right.
\]
- Let $L$ be the integral operator $L\,f = \int_0^1
k(x,y)f(y)dy$. Show that $L:C[0,1]\to C[0,1]$ is bounded and that the
norm $\|L\|_{C[0,1]\to C[0,1]}\le 1$. Actually, $\|L\|_{C[0,1]\to
C[0,1]}=1/2$. Can you show this?
- Show that $k(x,y)$ is a Hilbert-Schmidt kernel and that
$\|L\|_{L^2\to L^2} \le \sqrt{\frac{1}{6}}$.
Assignment 9 - Due Thursday, November 21
- Finish the proof of the Projection Theorem: If for every $f\in
\mathcal H$ there is a $p\in V$ such that $\|p-f\|=\min_{v\in
V}\|v-f\|$, then $V$ is closed.
- Prove this: If $L:\mathcal H\to \mathcal H$ is a bounded linear transformation, then $\overline{R(L)} = N(L^*)^\perp$.
- Let $\mathcal H$ be a Hilbert space of functions that are defined on $[0,1]$. In addition, suppose that $\mathcal H \subset C[0,1]$, with $\|f\|_{C[0,1]} \le K\|f\|_{\mathcal H}$ for all $f\in \mathcal H$. (The Sobolev space $H^1$ has this property.)
- Show that the point-evaluation functional $\Phi_x(f) =f(x)$ is a bounded linear functional on $\mathcal H$.
- Let $x$ be fixed. Show that there is a kernel $k(x,y)\in \mathcal H$ such that
\[
\Phi_x(f)=f(x) = \langle f,k(x,\cdot)\rangle
\]
(The kernel $k(x,y)$ is called a reproducing kernel and $\mathcal H$ is called a reproducing kernel Hilbert space.)
- For $x,z$ fixed, show that $k(z,x) = \langle k(z,\cdot),k(x,\cdot)\rangle$. In addition, let $\{x_j\}_{j=1}^n$ be any finite set of distinct points (i.e. $x_j\ne x_k$ if $j\ne k$) in $[0,1]$, and show that the matrix $G_{jk} = k(x_k,x_j)$ is positive semidefinite; that is, $\sum_{j,k}c_k\overline{c_j}\,k(x_k,x_j)\ge 0$.
- Suppose the matrix $G$ is positive definite and therefore invertible. Let $f\in \mathcal H$. Show that there are unique coefficients $\{c_j\}_{j=1}^n$ such that $s(x) =\sum_{j=1}^n k(x_j,x)c_j$ interpolates $f$ at the $x_j$'s.
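A small sketch of this interpolation with a concrete reproducing kernel, $k(x,y)=\min(x,y)$ (a standard example, chosen here for illustration; it is not derived in this problem):

```python
import numpy as np

# Kernel k(x, y) = min(x, y): a classical reproducing kernel (e.g. for a
# Sobolev-type space of functions vanishing at 0).
xj = np.array([0.2, 0.4, 0.6, 0.8])      # distinct points in (0, 1]
fj = np.sin(np.pi * xj)                  # data f(x_j) to interpolate

G = np.minimum.outer(xj, xj)             # G_{jk} = k(x_k, x_j)
# G is positive definite for distinct points in (0, 1], hence invertible,
# so the interpolation coefficients are unique.
c = np.linalg.solve(G, fj)

def s(x):
    """Interpolant s(x) = sum_j c_j k(x_j, x)."""
    return float(np.minimum(xj, x) @ c)

assert all(abs(s(x) - y) < 1e-12 for x, y in zip(xj, fj))
```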
- Consider the finite rank (degenerate) kernel $k(x,y) =
\varphi_1(x)\psi_1(y) + \varphi_2(x)\psi_2(y)$,
where $\varphi_1 = 2x-1$, $\varphi_2 = x^2$,
$\psi_1 = 1$, $\psi_2 = x$. Let $Ku =
\int_0^1 k(x,y)u(y)dy$. Assume that $L = I-\lambda K$ has closed range.
Alternative set of functions: Keep $\varphi_1$,
$\varphi_2$, and $\psi_1$ the same. For
$\psi_2$, use $\psi_2 = 4x - 3$.
- For what values of $\lambda$ does the integral equation
\[
u(x) - \lambda\int_0^1 k(x,y)u(y)dy = f(x)
\]
have a solution for all $f \in L^2[0,1]$?
- For these values, find the solution $u = (I -
\lambda K)^{-1}f$, i.e., find the resolvent.
- For the values of $\lambda$ for which the equation
does not have a solution for all $f$, find a condition on $f$
that guarantees a solution exists. Will the solution be unique?
- Consider the Hilbert space $\ell^{\,2}$. Let $S=\{\{a_j\}_{j=1}^\infty
\in \ell^{\,2}\colon \sum_{j=1}^\infty (1+j^2)\,|a_j|^2\le 1 \}$. Show
that $S$ is a compact subset of $\ell^{\,2}$.
Updated 11/15/2013.