MAS212 Class Test 2 (2017)

Aim:

  • To reinforce what you have learnt in lectures and lab classes this semester.

How it works:

  • Complete the notebook below, as follows. Click on a question, then from the menu select Insert -> Insert Cell Below. From the dropdown box (next to the "stop" symbol) choose whether you wish to insert 'Code' or 'Markdown' (i.e. text with HTML and LaTeX capabilities). Save regularly!
  • Press Shift-Enter to execute the code in a cell.
  • Submit via http://somas-uploads.shef.ac.uk/mas212 by midnight on Sunday 10th Dec. If you are off-campus, you will need to use a VPN to submit.
  • This is an open-book test, which means you may consult books, notes and internet resources. Do not discuss or share your test with anyone. Copy-and-pasting is not permitted. Please give answers in your own words.
  • This test is worth $5\%$ of the module mark. Please don't spend an excessive amount of time on it.
  • All submissions will be automatically checked for excessive similarities. Where there is sufficient circumstantial evidence of excessive similarities, a mark of zero can be awarded for this test.

Key Resources:

  • Lecture materials and links on course web page: http://sam-dolan.staff.shef.ac.uk/mas212/
In [1]:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%matplotlib inline
plt.rcParams.update({'font.size': 14})


A. Definitions and Concepts

1. Convergence (Lecture 9).
Define the spectral radius $\rho(R)$, where $R$ is a square matrix.

A: The spectral radius $\rho(R)$ is the maximum of the absolute values of the eigenvalues $\lambda_i$ of $R$: $\rho(R) = \max_i |\lambda_i|$.
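As a minimal numerical illustration (the matrix R below is made up for this example, and is not from the test):

In [ ]:
R = np.array([[0.0, 0.5],
              [0.25, 0.0]])
eigs = np.linalg.eigvals(R)  # eigenvalues of R
print(np.max(np.abs(eigs)))  # the spectral radius, max |lambda_i| (~0.3536 here)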

2. Conditioning (Lecture 9).
(i) Define the row sum norm $||A||$ and the condition number $C$ of an $n\times n$ square matrix $A$.
(ii) Find the condition number of the matrix below. Is $Ax = b$ well-conditioned or ill-conditioned, in this case? $$ A = \begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix} $$ (iii) In your own words, briefly describe the key lesson of Wilkinson's polynomial (Lec. 9).

A: (i) The row sum norm of an $n \times n$ matrix $A$ is the maximum, taken over the rows, of the sum of the absolute values of the entries in that row: $$ ||A|| = \max_{1\le i\le n} \sum_{j=1}^n |A_{ij}|. $$ The condition number of $A$ is the product of the row sum norms of $A$ and of its inverse: $$ C = ||A|| \, ||A^{-1}||. $$

(ii) Here $||A|| = 11/6$ and $||A^{-1}|| = 408$, so $C = (11/6) \times 408 = 748$. As a rule of thumb, a system is well-conditioned if $C \lesssim 10^n$; as $748 < 10^3 = 1000$, this system is (marginally) well-conditioned.
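A quick numerical check of part (ii), using numpy's built-in inverse (not required by the question; the helper rsn and the name A2 are ours):

In [ ]:
A2 = np.array([[1, 1/2, 1/3],
               [1/2, 1/3, 1/4],
               [1/3, 1/4, 1/5]])
def rsn(M):  # row sum norm
    return np.max(np.sum(np.abs(M), axis=1))
print(rsn(A2) * rsn(np.linalg.inv(A2)))  # condition number, ~748.0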

(iii) The lesson of Wilkinson's polynomial is that the roots of a high-order polynomial can be extraordinarily sensitive to its coefficients: a tiny change in one coefficient can produce drastic changes in some of the roots, while leaving other roots almost unchanged. In other words, finding roots from coefficients can be a severely ill-conditioned problem.
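One way to see this in numpy (a sketch, not part of the required answer): build Wilkinson's polynomial $\prod_{k=1}^{20}(x-k)$ with np.poly, apply Wilkinson's perturbation of $2^{-23}$ to the $x^{19}$ coefficient, and compare the roots.

In [ ]:
coeffs = np.poly(np.arange(1, 21))  # coefficients of (x-1)(x-2)...(x-20)
perturbed = coeffs.copy()
perturbed[1] -= 2**-23              # tiny change to the x^19 coefficient
print(np.roots(coeffs))             # close to the integers 1, ..., 20
print(np.roots(perturbed))          # several roots move drastically; some become complex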

3. Stability (Lectures 5 and 6). An MAS110 student decides to solve the differential equation below with the forward Euler method, and a step size of $h=0.1$. $$ \frac{dx}{dt} = -30 x + 1, \quad \quad x(0) = 2. $$

(a) What problem would the student encounter?
(b) Approximately, what is the maximum step size $h_{max}$ needed to avoid this problem with the forward Euler method?

A: (a) Using the forward Euler method with a step size of $h = 0.1$, the student would encounter a numerical instability: a spurious, oscillating exponential growth in the numerical solution that swamps the true solution.
(b) From the lectures, we expect a numerical instability when $h \ge 2 / |f'(x)|$, where here $f(x) = -30 x + 1$ and so $f'(x) = -30$. Thus the step size should be less than $h_{max} = 2/30 = 1/15 \approx 0.067$ to avoid this problem.
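A short check of this estimate (not asked for), iterating the forward Euler update $x_{n+1} = x_n + h(-30 x_n + 1)$ up to $t = 1$ for an unstable and a stable step size; forward_euler is our own helper:

In [ ]:
def forward_euler(h, nsteps, x0=2.0):
    x = x0
    for k in range(nsteps):
        x = x + h*(-30*x + 1)  # forward Euler step
    return x
print(forward_euler(0.1, 10))   # h = 0.1 > 1/15: deviation grows like (-2)^n -- unstable
print(forward_euler(0.05, 20))  # h = 0.05 < 1/15: decays towards 1/30 -- stable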


B. Coding and implementation

4. Gauss-Jordan elimination.
Apply row operations (as in Lab Class 9) to find the inverse matrix $A^{-1}$ and the solution $x$ in the linear system $A x = b$ where $$ A = \begin{pmatrix} 1 & 2 & -3 \\ 0 & -1 & 3 \\ 2 & 1 & 2 \end{pmatrix}, \quad \quad b = \begin{pmatrix} -1 \\ 4 \\ 9 \end{pmatrix} $$ (You may use and adapt the code in gauss-jordan1.py only from Lab Class 9).

In [2]:
import numpy as np
np.set_printoptions(threshold=np.inf) # print the whole augmented array

def swaprows(M,i,j):
    M[i-1,:], M[j-1,:] = M[j-1,:].copy(), M[i-1,:].copy()

def augmented(A,b):
    n = A.shape[0]
    aug = np.zeros((n,2*n+1))
    aug[:,:n] = A
    aug[:,n:2*n] = np.identity(n)
    aug[:,2*n] = b.transpose()
    return aug

A = np.matrix("1,2,-3;0,-1,3;2,1,2", dtype=np.float64)
b = np.matrix("-1;4;9", dtype=np.float64)
M = augmented(A, b)
print(M,"\n")  # Show the augmented matrix

swaprows(M,1,3) # Swap the first and third rows
print(M,"\n")

M[2,:] = M[2,:] - M[0,:]/2  # Eliminate the first entry of row 3
M[0,:] = M[0,:]/2  # Scale row 1 so that its pivot is 1
print(M,"\n")

swaprows(M,2,3)  # Swap the second and third rows
M[2,:] = M[2,:] - M[1,:]*M[2,1]/M[1,1]  # Eliminate the second entry of row 3
M[2,:] = M[2,:]*3  # Scale row 3 so that its pivot is 1
M[1,:] = M[1,:] / M[1,1]  # Scale row 2 so that its pivot is 1
print(M,"\n")

M[1,:] = M[1,:] - M[2,:]*M[1,2]/M[2,2]  # Eliminate the third entry of row 2
print(M,"\n")

M[0,:] = M[0,:] - M[2,:]*M[0,2]/M[2,2]  # Eliminate the third entry of row 1
M[0,:] = M[0,:] - M[1,:]*M[0,1]/M[1,1]  # Eliminate the second entry of row 1
print(M,"\n")

print("The matrix inverse is\n", M[:,3:6], "\nand the solution is\n x = ", M[:,-1])
[[ 1.  2. -3.  1.  0.  0. -1.]
 [ 0. -1.  3.  0.  1.  0.  4.]
 [ 2.  1.  2.  0.  0.  1.  9.]] 

[[ 2.  1.  2.  0.  0.  1.  9.]
 [ 0. -1.  3.  0.  1.  0.  4.]
 [ 1.  2. -3.  1.  0.  0. -1.]] 

[[ 1.   0.5  1.   0.   0.   0.5  4.5]
 [ 0.  -1.   3.   0.   1.   0.   4. ]
 [ 0.   1.5 -4.   1.   0.  -0.5 -5.5]] 

[[ 1.          0.5         1.          0.          0.          0.5         4.5       ]
 [ 0.          1.         -2.66666667  0.66666667  0.         -0.33333333
  -3.66666667]
 [ 0.          0.          1.          2.          3.         -1.          1.        ]] 

[[ 1.   0.5  1.   0.   0.   0.5  4.5]
 [ 0.   1.   0.   6.   8.  -3.  -1. ]
 [ 0.   0.   1.   2.   3.  -1.   1. ]] 

[[ 1.  0.  0. -5. -7.  3.  4.]
 [ 0.  1.  0.  6.  8. -3. -1.]
 [ 0.  0.  1.  2.  3. -1.  1.]] 

The matrix inverse is
 [[-5. -7.  3.]
 [ 6.  8. -3.]
 [ 2.  3. -1.]] 
and the solution is
 x =  [ 4. -1.  1.]
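As a cross-check (not part of the required row-operation method), numpy's built-in routines reproduce the same inverse and solution:

In [ ]:
print(np.linalg.inv(A))      # should match the inverse found above
print(np.linalg.solve(A, b)) # should match x = (4, -1, 1)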

5. Solving an ODE system numerically. (Lecture 6)
Use the backward Euler method to solve the ODE system in Question 3 with a step size $h=0.01$. Plot the solution $x(t)$ over the domain $t \in [0, 1]$.

A: The backward Euler method applied to the ODE in Q3 gives $$x_{n+1} = x_{n} + h (-30 x_{n+1} + 1) .$$ Collecting the $x_{n+1}$ terms on the left-hand side turns this into an explicit update: $$ x_{n+1} = (1 + 30h)^{-1} (x_n + h). $$

In [3]:
h = 0.01; n = int(1/h);
xs = np.zeros(n+1); ts = np.zeros(n+1);
xs[0] = 2  # The initial condition
for k in range(n):
    xs[k+1] = (xs[k] + h)/(1 + 30*h)  # Backward Euler update
    ts[k+1] = ts[k]+h
plt.xlabel('t'); plt.ylabel('x(t)');
plt.plot(ts,xs,'-');
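As a sanity check (not asked for), the numerical solution can be compared with the exact result $x(t) = \tfrac{1}{30} + \left(2 - \tfrac{1}{30}\right) e^{-30 t}$, using the arrays xs and ts from the cell above:

In [ ]:
x_exact = 1/30 + (2 - 1/30)*np.exp(-30*ts)  # exact solution of the ODE
print("max error: %.2e" % np.abs(xs - x_exact).max())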

6. The logistic map. (Lec 3 / Lab Class 3).
The logistic map is $$ x_{n+1} = r x_n (1 - x_n) $$ where $r$ is a real parameter.
(i) Compute sequences $[x_0, x_1, \ldots, x_{40}]$ with $x_0 = 1/2$, for three parameter choices:
$r = 2.5$, $r = 3.5$ and $r = 3.8$.
(ii) Plot the three sequences $(n,x_n)$ on the same graph. (Your plot should look something like this.) Briefly comment on what you see.

In [4]:
n = 40
ns = np.arange(n+1)
rs = [2.5,3.5,3.8]
for r in rs:
    xs = np.zeros(n+1)
    xs[0] = 1/2
    for k in range(n):
        xs[k+1] = r*xs[k]*(1-xs[k])
    plt.plot(ns, xs, '.', label="r = %.1f" % r)
plt.xlabel('$n$')
plt.ylabel('$x_n$')
plt.legend(loc=1);
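A: For $r = 2.5$ the sequence converges rapidly to the stable fixed point $x^* = 1 - 1/r = 0.6$. For $r = 3.5$ it settles onto a period-4 cycle. For $r = 3.8$ the sequence appears chaotic, wandering over much of the interval with no apparent period.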

7. Linear regression.
For the data set below, find the line of best fit $y = mx + c$, using linear regression.
(i) Calculate $m$ and $c$ using the formulae in Lecture 8 and print to screen to six decimal places.
(ii) Plot the data set with the line of best fit.

In [5]:
xs = np.array([ 0.38209254, -0.63095558, -0.27098983, -1.77894635,  0.89872936,
        1.72859985,  1.05801651,  1.8002796 , -0.48903099, -0.86228466,
        0.81095674])
ys = np.array([-0.48265341, -2.13554737, -0.60375244, -1.86415841,  1.27816274,
        2.15335151,  0.7339228 ,  1.87013722,  0.33664746, -0.47411395,
        0.81644587])

# Part (i)
covmat = np.cov(xs, ys, bias=True)
varx = covmat[0,0]
vary = covmat[1,1]
cov = covmat[0,1]
m = cov / varx  # gradient: m = Cov(x,y)/Var(x)
c = np.mean(ys) - m*np.mean(xs)  # intercept, from the means
print("The gradient and intercept are m = %.3f and c = %.3f." % (m,c))

# Part (ii)
xline = np.linspace(-2,2,2)
yline = m*xline + c
plt.plot(xs,ys, '.', xline, yline, '-');
The gradient and intercept are m = 1.095013 and c = -0.115407.
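As a cross-check (not required), np.polyfit with degree 1 fits the same least-squares line:

In [ ]:
m2, c2 = np.polyfit(xs, ys, 1)  # least-squares line fit: returns (slope, intercept)
print("polyfit: m = %.6f, c = %.6f" % (m2, c2))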

8. Animation. (Lecture 7).
Use matplotlib.animation.FuncAnimation() to create an animation with 200 frames, showing a standing wave on a string, described by: $$ y(x,t) = \sin(x) \sin(t) + 0.2 \sin(2 x) \cos(2 t) + 0.1 \sin(3 x) \sin(3 t) $$ Each frame of the animation should show $y$ vs $x$ on the domain $x \in [0, \pi]$. The frames should be linearly-spaced across $t \in [0, 4 \pi]$.
(You may adapt the code in sinwave.py, as in Lab Class 7, Q2.)

In [6]:
%matplotlib notebook
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
fig = plt.figure()
xmax = np.pi
ax = plt.axes(xlim=(0, xmax), ylim=(-1.3, 1.3))
line, = ax.plot([], [], lw=2)
def init():  # Initialize with a blank plot
    line.set_data([], [])
    return line,
nframes = 200
tmax = 4*np.pi
npts = 500
def animate(i):  # Plot a moving sine wave
    t = (i / nframes) * tmax
    x = np.linspace(0, xmax, npts)
    y = np.sin(x)*np.sin(t) + 0.2*np.sin(2*x)*np.cos(2*t)+0.1*np.sin(3*x)*np.sin(3*t)
    line.set_data(x, y)
    return line,
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=nframes, interval=20, blit=True)
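With the %matplotlib notebook backend the animation displays interactively in the notebook. To keep a copy, one option (assuming ffmpeg is installed) is anim.save('wave.mp4'); the filename here is just an example.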