Addition | Scalar multiplication
---|---
Associativity: u + (v + w) = (u + v) + w | a×(b×u) = (ab)×u
Identity: u + 0 = 0 + u = u | (a + b)×u = a×u + b×u
Inverse: u + (-u) = (-u) + u = 0 | a×(u + v) = a×u + a×v
Commutativity: u + v = v + u | 1×u = u
$$v_j = A_{1j}w_1 + A_{2j}w_2 + \cdots + A_{nj}w_n$$
$$w_k = C_{1k}v_1 + C_{2k}v_2 + \cdots + C_{nk}v_n$$
(Note that the sums are over the row index for each of the matrices A and C.)
For any vector v with representations
$$v = b_1 v_1 + \cdots + b_n v_n = d_1 w_1 + \cdots + d_n w_n$$
and corresponding coordinate vectors $[v]_B = [b_1, \ldots, b_n]^T$ and $[v]_D = [d_1, \ldots, d_n]^T$, we have the change-of-basis formulas
$$[v]_D = A[v]_B \quad\text{and}\quad [v]_B = C[v]_D.$$
These imply that $AC = CA = I_{n\times n}$.
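These relations are easy to check numerically. Here is a small numpy sketch; the bases B and D below are made-up for illustration, not taken from the notes:

```python
import numpy as np

# Two bases for R^2, stored as columns.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # basis B = {v1, v2}
D = np.array([[2.0, 0.0],
              [1.0, 1.0]])   # basis D = {w1, w2}

# Column j of A holds the D-coordinates of v_j, so [v]_D = A [v]_B.
A = np.linalg.solve(D, B)
# Column k of C holds the B-coordinates of w_k, so [v]_B = C [v]_D.
C = np.linalg.solve(B, D)

assert np.allclose(A @ C, np.eye(2))   # AC = CA = I

v_B = np.array([3.0, -1.0])        # coordinates of some v relative to B
v = B @ v_B                        # the vector itself
v_D = A @ v_B                      # its coordinates relative to D
assert np.allclose(D @ v_D, v)     # both coordinate vectors name the same v
```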
With
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} & \cdots & r_{1n} \\ 0 & r_{22} & r_{23} & \cdots & r_{2n} \\ 0 & 0 & r_{33} & \cdots & r_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & r_{nn} \end{bmatrix}$$
we have A = QR. If the basis E doesn't change the inner product, then the columns of Q are orthonormal. This is the QR factorization. It also works if we start with an arbitrary matrix and apply Gram-Schmidt to the column space, using the inner product $\langle u, v \rangle = v^T u$.
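Here is a short numpy sketch of this procedure, applied to the columns of a made-up full-rank matrix with the Euclidean inner product:

```python
import numpy as np

def gram_schmidt_qr(A):
    """(Modified) Gram-Schmidt on the columns of A, returning Q, R with A = QR."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        w = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ w    # component of w along the earlier q_i
            w -= R[i, j] * Q[:, i]   # subtract it off
        R[j, j] = np.linalg.norm(w)
        Q[:, j] = w / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns
assert np.allclose(Q @ R, A)             # A = QR
```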
$$Ac = d, \quad\text{where } A_{jk} = \langle w_k, w_j \rangle, \quad d_j = \langle v, w_j \rangle.$$
The matrix A is called the Gram matrix for the basis of w's; it is always invertible. (See problem 1, Assignment 4.)
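As an illustration, here is a numpy sketch of solving the normal equations for the best approximation out of a subspace; the w's and v below are made-up data, and the inner product is the Euclidean one:

```python
import numpy as np

# Columns of W are the basis vectors w_1, w_2 of the subspace.
W = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
v = np.array([1.0, 2.0, 3.0])

A = W.T @ W          # Gram matrix: A[j, k] = <w_k, w_j>
d = W.T @ v          # d[j] = <v, w_j>
c = np.linalg.solve(A, d)
u_star = W @ c       # the minimizer of ||v - u|| over the subspace

# The residual is orthogonal to every w_j, as the normal equations require.
assert np.allclose(W.T @ (v - u_star), 0)
```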
Orthonormal case If the basis $\{u_1, \ldots, u_n\}$ (u's replace w's) is orthonormal, then A = I and c = d; that is, the minimizer has the form
$$u^* = \langle v, u_1 \rangle u_1 + \cdots + \langle v, u_n \rangle u_n.$$
Finite element method We want to illustrate the finite element method with a simple example. Solve the ODE $-y'' = f(x)$, $y(0) = y(1) = 0$. We take the space V to be all continuous functions g(x) defined on [0,1] having a piecewise continuous derivative g'(x) and satisfying g(0) = g(1) = 0. The inner product for this space is
$$\langle f, g \rangle = \int_0^1 f'(x) g'(x)\, dx.$$
The subspace U comprises all piecewise linear functions in V, with possible discontinuities in the derivatives at 0, 1/n, 2/n, ..., (n-1)/n, 1. These are linear B-splines; the points where the derivative may jump are called knots. A (non-orthogonal) basis is
$$w_j(x) = B(nx - j), \quad j = 1, \ldots, n-1,$$
where B(x) is the piecewise linear function defined this way:
$$B(x) = \begin{cases} 1 + x & \text{if } -1 \le x < 0, \\ 1 - x & \text{if } 0 \le x \le 1, \\ 0 & \text{if } x < -1 \text{ or } x > 1. \end{cases}$$
We want to minimize $\| y - u \|$. Relative to $\{w_1, \ldots, w_{n-1}\}$, the normal equations are Ac = d, where
$$d_j = \langle y, w_j \rangle = \int_0^1 f(x) w_j(x)\, dx \quad\text{(integrate by parts)}$$
$$A_{jk} = \langle w_k, w_j \rangle.$$
From here on, one must compute d and A, and solve Ac = d for c. In this case, one can show that u*(x) is just the piecewise linear function that is 0 at x = 0, equal to c_j at x = j/n for j = 1, ..., n-1, and 0 at x = 1. One can thus plot u* by ``connecting the dots.''
A specific example will be included in Assignment 4.
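In the meantime, here is a minimal numpy sketch of the computation. It assumes the load integrals are approximated by the one-point rule $d_j \approx f(j/n)\cdot(1/n)$ (the notes compute them exactly); with $f(x) = \pi^2 \sin(\pi x)$ the exact solution is $y = \sin(\pi x)$:

```python
import numpy as np

n = 20
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)              # the knots 0, 1/n, ..., 1
f = lambda t: np.pi**2 * np.sin(np.pi * t)    # exact solution: y = sin(pi x)

# Stiffness matrix A[j,k] = <w_k, w_j> = integral of w_k' w_j' dx.
# For the hat functions this is tridiagonal: 2/h on the diagonal, -1/h off it.
main = (2.0 / h) * np.ones(n - 1)
off = (-1.0 / h) * np.ones(n - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Load vector d_j = integral of f(x) w_j(x) dx, approximated by the
# one-point rule f(j/n) * (area under w_j) = f(j/n) * h.
d = f(x[1:-1]) * h

c = np.linalg.solve(A, d)
u = np.concatenate(([0.0], c, [0.0]))         # u*(j/n) = c_j, zero at the ends

print(np.max(np.abs(u - np.sin(np.pi * x))))  # small discretization error
```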
Infinite sets of orthonormal vectors See §4 in my notes on Function spaces (PS) (PDF).
Example: rotation of the plane through the angle t is linear, and its matrix is
$$\begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}.$$
$$[L[v]]_D = c_1 [L[v_1]]_D + c_2 [L[v_2]]_D + \cdots + c_n [L[v_n]]_D = M_L [v]_B,$$
where
$$M_L = [\, [L[v_1]]_D \;\; \cdots \;\; [L[v_n]]_D \,].$$
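As a sketch, one can assemble $M_L$ column by column from the images of the basis vectors. The example below (differentiation on polynomials of degree at most 2, with coordinates holding coefficients) is illustrative, not from the notes:

```python
import numpy as np

def L(coeffs):
    """d/dx of a0 + a1 x + a2 x^2, in the basis {1, x, x^2}."""
    a0, a1, a2 = coeffs
    return np.array([a1, 2 * a2, 0.0])

basis = np.eye(3)               # coordinate vectors [v_1]_B, [v_2]_B, [v_3]_B
M_L = np.column_stack([L(basis[:, j]) for j in range(3)])

p = np.array([5.0, 3.0, 2.0])   # 5 + 3x + 2x^2
assert np.allclose(M_L @ p, L(p))   # [L[v]]_D = M_L [v]_B
```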
Sums K+L is defined by (K+L)[v] = K[v] + L[v].
Scalar multiples cL is defined by (cL)[v] = c(L[v]).
Products If K : V -> U and L : U -> W are linear, then we define LK via LK[v] = L[K[v]]. (This is composition of functions.) The transformation LK defined this way is linear, and maps V -> W. Note: LK is not in general equal to KL.
Inverses Let L : V -> V be linear. As a function, if L is both one-to-one and onto, then it has an inverse K : V -> V. One can show that K is linear, and LK = KL = I, the identity transformation. We write $K = L^{-1}$.
Associated matrices
Polynomials in L : V -> V We define powers of L in the usual way: $L^2 = LL$, $L^3 = LLL$, and so on. A polynomial in L is then the transformation
$$p(L) = a_0 I + a_1 L + \cdots + a_m L^m.$$
Later on we will encounter the Cayley-Hamilton theorem, which says that if V has dimension n, then there is a polynomial p of degree n (or less) for which p(L) is the 0 transformation.
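One can check the theorem numerically; here is a quick numpy sketch on a random matrix, evaluating the characteristic polynomial at the matrix by Horner's method:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

p = np.poly(A)               # coefficients of det(tI - A), highest power first
P = np.zeros_like(A)
for coeff in p:              # Horner's method, evaluated at the matrix A
    P = P @ A + coeff * np.eye(4)

print(np.max(np.abs(P)))     # ~1e-13: p(A) is the zero transformation
```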
...
$$x_n = \det([a_1, a_2, \ldots, a_{n-1}, y]) / \det(A).$$
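As a sketch, here is Cramer's rule in numpy on a made-up 2×2 system:

```python
import numpy as np

# Cramer's rule: x_k = det(A with column k replaced by y) / det(A).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
y = np.array([5.0, 10.0])

x = np.array([np.linalg.det(np.column_stack([A[:, :k], y, A[:, k+1:]]))
              for k in range(A.shape[1])]) / np.linalg.det(A)
assert np.allclose(A @ x, y)   # x = (1, 3) solves the system
```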
The matrix
$$A = \begin{bmatrix} 2 & -3 & 1 \\ 1 & -2 & 1 \\ 1 & -3 & 2 \end{bmatrix}$$
has eigenvalues 0 and 1 (repeated twice), and corresponding eigenvectors forming the columns of
$$S = \begin{bmatrix} 1 & -1 & 3 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix};$$
then
$$S^{-1} A S = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
The matrix
$$M_T = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$$
is not diagonalizable. We can see this directly simply by noting that it has only 1 as an eigenvalue, and for T to be diagonalizable, $M_T$ would have to be similar to the identity, I. One can also see this by noting that, up to multiples, the only eigenvector is $e_1$, which doesn't form a basis for $R^2$.
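The diagonalizable example above can be checked numerically:

```python
import numpy as np

A = np.array([[2.0, -3.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, -3.0, 2.0]])
S = np.array([[1.0, -1.0, 3.0],
              [1.0,  0.0, 1.0],
              [1.0,  1.0, 0.0]])   # columns: eigenvectors for 0, 1, 1

D = np.linalg.solve(S, A @ S)      # S^{-1} A S
assert np.allclose(D, np.diag([0.0, 1.0, 1.0]))
```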
$$\begin{bmatrix} T_{1,1} & 0 & 0 & \cdots & 0 \\ 0 & T_{2,2} & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & T_{r,r} \end{bmatrix}$$
$$\begin{bmatrix} z_7 & * & * & * \\ 0 & z_7 & * & * \\ 0 & 0 & z_7 & * \\ 0 & 0 & 0 & z_7 \end{bmatrix}$$
$$\begin{bmatrix} 3 & 1 & 0 & 0 & 0 & 0 \\ 0 & 3 & 1 & 0 & 0 & 0 \\ 0 & 0 & 3 & 1 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & 3 \end{bmatrix}$$
$$A f_1 = z f_1$$
$$A f_2 = z f_2 + f_1$$
$$\vdots$$
$$A f_m = z f_m + f_{m-1}$$
$$A = \begin{bmatrix} 2 & 1 & -1 \\ 0 & 2 & 3 \\ 0 & 0 & 2 \end{bmatrix}$$
We begin with the eigenvector $f_1 = (1,0,0)^T$. Solving $(A - 2I) f_2 = f_1$ gives $f_2 = (0,1,0)^T$. Finally, solving $(A - 2I) f_3 = f_2$ gives $f_3 = (0, 1/3, 1/3)^T$. Thus $S^{-1} A S = J_3(2)$, where $S = [f_1\; f_2\; f_3]$.
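A numerical check of this chain computation:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [0.0, 2.0,  3.0],
              [0.0, 0.0,  2.0]])
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1/3],
              [0.0, 0.0, 1/3]])    # columns f1, f2, f3

J = np.linalg.solve(S, A @ S)      # S^{-1} A S
assert np.allclose(J, np.array([[2.0, 1.0, 0.0],
                                [0.0, 2.0, 1.0],
                                [0.0, 0.0, 2.0]]))   # J_3(2)
```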
$$S = \begin{bmatrix} s_1 & 0 & 0 & \cdots & 0 & \cdots & 0 \\ 0 & s_2 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & s_r & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & \cdots & 0 \end{bmatrix}$$
Then we note that
$$\| S z - c \|^2 = \sum_{k=1}^{r} (s_k z_k - c_k)^2 + \sum_{k=r+1}^{n} c_k^2.$$
Choosing $z_k = c_k / s_k$ for k = 1, ..., r and $z_k = 0$ for k = r+1, ..., n not only solves the problem, but also gives the solution $x = Vz$ with smallest length $\|x\| = \|z\|$.
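Here is a numpy sketch of this recipe on a made-up rank-deficient system, compared against the pseudoinverse solution:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank 2
b = rng.standard_normal(5)

U, s, Vt = np.linalg.svd(A)
c = U.T @ b
r = np.sum(s > 1e-12 * s[0])       # numerical rank
z = np.zeros(3)
z[:r] = c[:r] / s[:r]              # z_k = c_k / s_k for nonzero s_k, else 0
x = Vt.T @ z                       # x = Vz, the smallest-norm minimizer

assert np.allclose(x, np.linalg.pinv(A) @ b)   # agrees with the pseudoinverse
```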
$$A^T A v_k = z_k v_k$$
$$v_k^T A^T A v_k = z_k v_k^T v_k$$
$$\| A v_k \|^2 = z_k \| v_k \|^2$$
$$\| A v_k \|^2 = z_k \quad (\| v_k \| = 1)$$
This calculation also shows that a vector v is in the null space of A if and only if it is an eigenvector corresponding to the eigenvalue 0. If r = rank(A), then "rank + nullity = # of columns" tells us that nullity(A) = n - r. This means that there are r eigenvectors for the remaining eigenvalues. List these as $z_1 \ge z_2 \ge \cdots \ge z_r > 0$. Our input basis is now chosen as $\{v_1, \ldots, v_r, v_{r+1}, \ldots, v_n\}$. The numbering is the same as that for the eigenvalues. We now define the matrix V via
$$V = [\, v_1 \;\cdots\; v_r \;\; v_{r+1} \;\cdots\; v_n \,].$$
$$S = [\, [z_1^{1/2} u_1]_U \;\; [z_2^{1/2} u_2]_U \;\cdots\; [z_r^{1/2} u_r]_U \;\; [0]_U \;\cdots\; [0]_U \,]$$
$$S = [\, z_1^{1/2} [u_1]_U \;\cdots\; z_r^{1/2} [u_r]_U \;\; 0 \;\cdots\; 0 \,]$$
$$S = [\, z_1^{1/2} e_1 \;\cdots\; z_r^{1/2} e_r \;\; 0 \;\cdots\; 0 \,].$$
If we let $s_k = z_k^{1/2}$ for k = 1, ..., r, we get the same S as the one given in the statement of the theorem. These $s_k$'s are the singular values of A. The matrix S is related to A via multiplication by change-of-basis matrices. The matrix U changes from new output to old output bases, and V changes from new input to old input bases. Since $V^T = V^{-1}$, we have that $V^T$ changes from old input to new input bases. In the end, this gives us $A = U S V^T$.
Let
$$A = \begin{bmatrix} 2 & -2 \\ 1 & 1 \\ -2 & 2 \end{bmatrix}.$$
Here,
$$A^T A = \begin{bmatrix} 9 & -7 \\ -7 & 9 \end{bmatrix}.$$
The eigenvalues of this matrix are $z_1 = 16$ and $z_2 = 2$. The singular values are $s_1 = 4$ and $s_2 = 2^{1/2}$. We can immediately write out what S is. We have
$$S = \begin{bmatrix} 4 & 0 \\ 0 & 2^{1/2} \\ 0 & 0 \end{bmatrix}.$$
The eigenvector corresponding to 16 is $v_1 = 2^{-1/2}(1,-1)^T$, and the one corresponding to 2 is $v_2 = 2^{-1/2}(1,1)^T$. Hence, we see that
$$V = \begin{bmatrix} 2^{-1/2} & 2^{-1/2} \\ -2^{-1/2} & 2^{-1/2} \end{bmatrix}.$$
Next, we find the u's.
$$u_1 = A v_1 / z_1^{1/2} = 2^{-1/2} (4, 0, -4)^T / 4 = (2^{-1/2}, 0, -2^{-1/2})^T.$$
A similar calculation gives us $u_2 = (0, 1, 0)^T$. We now have to add to these a "fill" vector
$$u_3 = (2^{-1/2}, 0, 2^{-1/2})^T$$
to complete the new output basis. This finally yields
$$U = \begin{bmatrix} 2^{-1/2} & 0 & 2^{-1/2} \\ 0 & 1 & 0 \\ -2^{-1/2} & 0 & 2^{-1/2} \end{bmatrix}.$$
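Checking this example with numpy (note that numpy may return U and V with columns differing in sign from the hand computation):

```python
import numpy as np

A = np.array([[ 2.0, -2.0],
              [ 1.0,  1.0],
              [-2.0,  2.0]])
U, s, Vt = np.linalg.svd(A)
print(s)                       # [4.0, 1.414...]: s1 = 4, s2 = sqrt(2)

# Reassemble A = U S V^T with S the 3x2 matrix from the text.
S = np.zeros((3, 2))
S[0, 0], S[1, 1] = s
assert np.allclose(U @ S @ Vt, A)
```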
Starting with
$$A = \begin{bmatrix} 2 & -1 & 4 \\ 1 & 1 & 1 \\ -1 & 3 & 2 \end{bmatrix},$$
Gaussian elimination on the first column gives
$$\begin{bmatrix} 2 & -1 & 4 \\ 0 & 3/2 & -1 \\ 0 & 5/2 & 4 \end{bmatrix},$$
and then elimination on the second column gives
$$U = \begin{bmatrix} 2 & -1 & 4 \\ 0 & 3/2 & -1 \\ 0 & 0 & 17/3 \end{bmatrix}.$$
Collecting the multipliers used along the way yields
$$L = \begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ -1/2 & 5/3 & 1 \end{bmatrix},$$
and A = LU.
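A check of this factorization, together with scipy's pivoted LU (scipy permutes rows for numerical stability, so its factors need not match the hand computation):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0, -1.0, 4.0],
              [ 1.0,  1.0, 1.0],
              [-1.0,  3.0, 2.0]])
L = np.array([[ 1.0,  0.0, 0.0],
              [ 0.5,  1.0, 0.0],
              [-0.5, 5/3,  1.0]])
U = np.array([[ 2.0, -1.0,  4.0],
              [ 0.0,  1.5, -1.0],
              [ 0.0,  0.0, 17/3]])
assert np.allclose(L @ U, A)       # the factorization found by hand

P, L2, U2 = lu(A)                  # scipy's pivoted factorization
assert np.allclose(P @ L2 @ U2, A)
```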
Space | Basis | Components |
---|---|---|
V | covariant | contravariant |
V* | contravariant | covariant |
$$(dy^1)^2 + (dy^2)^2 + (dy^3)^2 - (dx^1)^2 - (dx^2)^2 - (dx^3)^2 = \sum_{j,k} 2 s_{j,k}\, dx^j dx^k,$$
where
$$s_{j,k} = \tfrac{1}{2}\Bigl( D_{x^j} u_k + D_{x^k} u_j + \sum_i D_{x^j} u_i\, D_{x^k} u_i \Bigr)$$
is the strain tensor. (In the linear theory of elasticity, the products are neglected.) The strain tensor is also second order, but it is purely covariant.
$$(ds)^2 = \sum_{j,k} g_{j,k}\, dw^j dw^k.$$
The tensor $g_{j,k}$ is called the metric tensor. It is purely covariant, and has order 2.
$$(ds)^2 = dw^T J^T g_u J\, dw = dw^T g_w\, dw.$$
Consequently, we have that $g_w = J^T g_u J$. This also can be derived via the covariance of the metric tensor. The matrices involved are all square and n×n. Taking determinants of both sides, and then taking square roots, yields
$$\det(g_w)^{1/2} = |\det(J)|\, \det(g_u)^{1/2}, \quad J = u'(w).$$
The quantity det(J) is called the Jacobian determinant, and is often written as $\partial(u^1, \ldots, u^n) / \partial(w^1, \ldots, w^n)$.
$$\det(g_u)^{1/2}\, du^1 \cdots du^n = \det(g_u)^{1/2}\, |\det(u'(w))|\, dw^1 \cdots dw^n.$$
Recall that we also have $\det(g_w)^{1/2} = |\det(J)|\, \det(g_u)^{1/2}$, with $J = u'(w)$. Substituting this into the last equation then gives us
$$\det(g_u)^{1/2}\, du^1 \cdots du^n = \det(g_w)^{1/2}\, dw^1 \cdots dw^n,$$
which shows that the combination $\det(g_u)^{1/2}\, du^1 \cdots du^n$ is invariant under coordinate transformations. This is often called the invariant volume element.
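As a sketch, one can verify $g_w = J^T g_u J$ and the resulting volume element for polar coordinates with sympy (assuming sympy is available); the result is the familiar r dr dt:

```python
import sympy as sp

# Polar map u(r, t) = (r cos t, r sin t), with the Euclidean metric g_u = I.
r, t = sp.symbols('r t', positive=True)
u = sp.Matrix([r * sp.cos(t), r * sp.sin(t)])
J = u.jacobian([r, t])

g_u = sp.eye(2)
g_w = sp.simplify(J.T * g_u * J)
print(g_w)                               # Matrix([[1, 0], [0, r**2]])

# det(g_w)^(1/2) = |det J| det(g_u)^(1/2): the volume element is r dr dt.
print(sp.simplify(sp.sqrt(g_w.det())))   # r
```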
The fluid crossing a surface element in the time interval from t to t + dt sweeps out a parallelepiped with edges V dt, $f_1\, du^1$, and $f_2\, du^2$. The mass of the fluid crossing the base in time dt is then density×volume, or
$$(\mu V\, dt) \cdot f_1 \times f_2\; du^1 du^2.$$
Thus the mass per unit time crossing the base is $F \cdot N\, du^1 du^2$, where $F = \mu V$, and $N = f_1 \times f_2$ is the standard normal. Recall that the area of the surface element is $dS = |N|\, du^1 du^2$. Consequently the mass per unit time crossing the base is $F \cdot n\, dS$, where n is the unit normal. Integrating over the whole surface then yields
$$\iint_S F \cdot n\, dS.$$
This surface integral is called the flux of the vector field F.
Green's Theorem Let C be a piecewise smooth simple closed curve that is the boundary of its interior region R. If F(x,y) = A(x,y)i + B(x,y)j is a vector-valued function that is continuously differentiable on and in C, then
$$\oint_C A\, dx + B\, dy = \iint_R \Bigl( \frac{\partial B}{\partial x} - \frac{\partial A}{\partial y} \Bigr)\, dA.$$
To state this theorem, we also need to define the curl of a vector field
$$F(x) = A(x,y,z)\, i + B(x,y,z)\, j + C(x,y,z)\, k.$$
We will assume that F has continuous partial derivatives. The curl is then defined by
$$\mathrm{curl}\, F = \nabla \times F = \Bigl( \frac{\partial C}{\partial y} - \frac{\partial B}{\partial z} \Bigr) i + \Bigl( \frac{\partial A}{\partial z} - \frac{\partial C}{\partial x} \Bigr) j + \Bigl( \frac{\partial B}{\partial x} - \frac{\partial A}{\partial y} \Bigr) k.$$
There is an important connection between the Jacobian derivative of a vector field and the curl of that vector field. Namely, the antisymmetric part of the Jacobian derivative has components of the curl for entries. This is important because it gives us the following formula. For any two vectors b and c in $R^3$,
$$c^T (F' - (F')^T)\, b = \mathrm{curl}\, F \cdot b \times c.$$
This is useful in proving Stokes' theorem, which we now state.
Stokes' Theorem Let S be an orientable surface bounded by a simple closed positively oriented curve C. If F is a continuously differentiable vector-valued function defined in a region containing S, then
$$\oint_C F \cdot dr = \iint_S \mathrm{curl}\, F \cdot n\, dS.$$
Divergence Theorem Let V be a region in 3D bounded by a closed, piecewise smooth, orientable surface S; let the outward-drawn normal be n. Then,
$$\iint_S F \cdot n\, dS = \iiint_V \nabla \cdot F\, dV.$$
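A crude midpoint-rule check of the theorem on the unit cube, for the made-up field F = (xy, yz, zx), where div F = x + y + z:

```python
import numpy as np

n = 100
h = 1.0 / n
m = (np.arange(n) + 0.5) * h                # midpoints in one direction
X, Y, Z = np.meshgrid(m, m, m, indexing='ij')
vol_integral = np.sum(X + Y + Z) * h**3     # integral of div F over the cube

# Flux through the faces x=1, y=1, z=1, where F.n is y, z, x respectively;
# on the x=0, y=0, z=0 faces F.n vanishes, so they contribute nothing.
U, V = np.meshgrid(m, m, indexing='ij')
flux = (np.sum(U) + np.sum(V) + np.sum(U)) * h**2

print(vol_integral, flux)                   # both equal 3/2
```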
Using the Divergence Theorem, we can write this in terms of a volume integral:
$$\iint_S F \cdot n\, dS = \iiint_V \nabla \cdot F\, dV, \quad F = \mu V.$$
Since there are no sources or sinks in S, any mass entering or leaving the volume enclosed by S must pass through S. Thus, the rate at which the mass of fluid inside of S changes in time is the negative of the flux:
$$\frac{d}{dt} \iiint_V \mu\, dV = - \iint_S F \cdot n\, dS.$$
Interchanging the time derivative and the triple integral and bringing everything under the same integral, we have that
$$\iiint_V \Bigl( \frac{\partial \mu}{\partial t} + \nabla \cdot (\mu V) \Bigr)\, dV = 0.$$
This equation holds for all regions in which the fluid is source free. It follows that the integrand must be 0, otherwise we could pick a small cube restricted to a region in which it was positive (or negative). The whole integral would then be positive (or negative). Hence, we arrive at the following equation, known as the equation of continuity:
$$\frac{\partial \mu}{\partial t} + \nabla \cdot (\mu V) = 0.$$