Math 660 - Fall 1999

Solutions to Selected Problems

29 August

Problem 1.1.1

c = I(:,1)
d = zeros(n,1)
for k = 1:n
    for j = 1:n
        d = (A(:,j) - x(k)*I(:,j))*c(j) + d
    end
    % c now holds (A - x(k)*I) * ... * (A - x(1)*I) * e_1
    c = d
    d = zeros(n,1)
end
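In NumPy the same column-by-column update can be sketched as follows; the function name and the convention that `x` holds the n shift values are illustrative, not part of the original problem statement:

```python
import numpy as np

def shifted_product_e1(A, x):
    """Apply (A - x[n-1]*I) ... (A - x[0]*I) to e_1, one column at a time."""
    n = A.shape[0]
    c = np.zeros(n)
    c[0] = 1.0                      # c = e_1, i.e. I(:,1)
    for k in range(n):
        d = np.zeros(n)
        for j in range(n):
            col = A[:, j].copy()    # column j of (A - x[k]*I)
            col[j] -= x[k]
            d += col * c[j]         # accumulate d = (A - x[k]*I) * c
        c = d
    return c
```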

2 September

Problem 1.2.5.

Recall that A = A^H and that A = B + iC, where B and C are real, with B^T = B and C^T = -C (so the diagonal of C is zero). We store B and C in the array A.herm: the lower triangular part of A.herm, including the diagonal, holds B, and the strictly upper triangular part holds C. We want to compute the real and imaginary parts of z = Ax, given A.herm and the real and imaginary parts of x. Let r = Re(x), s = Im(x), u = Re(z), and v = Im(z). Expanding Ax = (B + iC)(r + is), we obtain the following:

u = Br - Cs and v = Bs + Cr.

We can compute these vectors using the information stored in the matrix Aherm :

for k = 1:n
    u(k) = A.herm(k,1:k)*r(1:k) + r((k+1):n)'*A.herm((k+1):n,k) ...
           + s(1:k)'*A.herm(1:k,k) - A.herm(k,k:n)*s(k:n)
    v(k) = A.herm(k,1:k)*s(1:k) + s((k+1):n)'*A.herm((k+1):n,k) ...
           - r(1:k)'*A.herm(1:k,k) + A.herm(k,k:n)*r(k:n)
end

(The diagonal entry A.herm(k,k) = B(k,k) enters the last two terms of each formula with opposite signs, so the overlap at index k cancels and those two terms together contribute exactly the C part of the product.)
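A runnable NumPy sketch of the same computation; the function name and the unpacking into full B and C matrices are illustrative, and the point is only to check the formulas u = Br - Cs, v = Bs + Cr against the packed storage:

```python
import numpy as np

def herm_matvec(Aherm, r, s):
    """Return (u, v) with u + i*v = (B + i*C)(r + i*s), where the lower
    triangle (with diagonal) of Aherm stores the symmetric B and the
    strictly upper triangle stores the skew-symmetric C."""
    B = np.tril(Aherm) + np.tril(Aherm, -1).T     # recover symmetric B
    C = np.triu(Aherm, 1) - np.triu(Aherm, 1).T   # recover skew-symmetric C
    u = B @ r - C @ s
    v = B @ s + C @ r
    return u, v
```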

Problem 1.2.10.

The structure of a typical matrix is (n = 4):

    [ p1*q1  p1*q2  p1*q3  p1*q4 ]
    [ p1*q2  p2*q2  p2*q3  p2*q4 ]
    [ p1*q3  p2*q3  p3*q3  p3*q4 ]
    [ p1*q4  p2*q4  p3*q4  p4*q4 ]

that is, A(i,j) = p(min(i,j)) * q(max(i,j)).

Here is an algorithm for computing the product Ax, where A has the form given.

a = 0
b = q'*x                          (2n flops: inner product)
for j = 1:n
    a = a + p(j)*x(j)             (2 flops)
    b = b - q(j)*x(j)             (2 flops)
    y(j) = a*q(j) + b*p(j)        (3 flops)
end

After iteration j, a holds the partial sum p(1)x(1) + ... + p(j)x(j) and b holds q(j+1)x(j+1) + ... + q(n)x(n), which are exactly the two sums needed for y(j). There are 7 flops per iteration, so the loop takes 7n flops. The initialization b = q'*x requires 2n flops. Overall, the process takes 9n flops.
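The linear-time multiply is easy to check against the explicitly formed matrix; a small NumPy sketch (function name illustrative):

```python
import numpy as np

def structured_matvec(p, q, x):
    """Compute y = A*x in O(n) flops, where A(i,j) = p[min(i,j)] * q[max(i,j)]."""
    n = len(x)
    y = np.empty(n)
    a = 0.0
    b = np.dot(q, x)            # b starts as sum_j q(j)*x(j)
    for j in range(n):
        a += p[j] * x[j]        # running sum over indices <= j
        b -= q[j] * x[j]        # running sum over indices > j
        y[j] = a * q[j] + b * p[j]
    return y
```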

23 September

Problem 2.5.1.

The matrix I - S is singular if and only if there is a nonzero vector x in R^n such that (I - S)x = 0. Suppose such an x exists. Multiplying on the left by x^T gives x^T x - x^T S x = 0. Note that

x^T S x = (x^T S x)^T = x^T S^T x = -x^T S x  ==>  x^T S x = 0.

Hence x^T x = x^T S x = 0, which forces x = 0, a contradiction. Thus, I - S is nonsingular. Similarly (replace S by -S, which is also skew-symmetric), I + S is nonsingular. Hence,

( (I - S)^(-1)(I + S) )^T = (I + S)^T ( (I - S)^T )^(-1) = (I - S)(I + S)^(-1) = (I + S)^(-1)(I - S),

where the last equality holds because I - S and I + S commute. This transpose is the inverse of (I - S)^(-1)(I + S), so (I - S)^(-1)(I + S) is orthogonal.
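The conclusion is easy to sanity-check numerically; a small NumPy sketch with a random skew-symmetric S (this map S -> (I - S)^(-1)(I + S) is the Cayley transform):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
S = M - M.T                          # skew-symmetric: S.T == -S
I = np.eye(n)
Q = np.linalg.solve(I - S, I + S)    # Q = (I - S)^(-1) (I + S)
# Q should satisfy Q.T @ Q == I, i.e. Q is orthogonal
print(np.allclose(Q.T @ Q, I))
```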


30 September

Formulas 2.5.1 and 2.5.2

We first deal with 2.5.1. The square of the Frobenius norm is ||A||_F^2 = sum_{j,k} |A(j,k)|^2. We make the following observations.

  1. ||A^T||_F^2 = ||A||_F^2, because sum_{j,k} |A^T(j,k)|^2 = sum_{j,k} |A(k,j)|^2 = sum_{k,j} |A(k,j)|^2.
  2. ||A||_F^2 = sum_k ||A(:,k)||_2^2 (the sum over j gives the squared 2-norm of column k).
  3. For any matrix B compatible with A, B*A = [B*A(:,1) ... B*A(:,n)], so ||B*A||_F^2 = sum_k ||B*A(:,k)||_2^2.

For Q orthogonal (multiplication by Q preserves the 2-norm of each column),
||Q*A||_F^2 = sum_k ||Q*A(:,k)||_2^2 = sum_k ||A(:,k)||_2^2 = ||A||_F^2.

For Z orthogonal (Z^T is also orthogonal),
||A*Z||_F^2 = ||(A*Z)^T||_F^2 = ||Z^T*A^T||_F^2 = ||A^T||_F^2 = ||A||_F^2.

Putting the two together gives formula 2.5.1. The derivation of 2.5.2 is less difficult, and we will not give it here.
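A quick numerical check of formula 2.5.1 (orthogonal invariance of the Frobenius norm), using QR factorizations of random matrices to generate the orthogonal factors Q and Z:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 4
A = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random m-by-m orthogonal Q
Z, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random n-by-n orthogonal Z
# ||Q A Z||_F should equal ||A||_F
print(np.allclose(np.linalg.norm(Q @ A @ Z, 'fro'), np.linalg.norm(A, 'fro')))
```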