MAS212 Class Test 2 (2016)

Aim:

  • To reinforce what you have learnt in lectures and lab classes this semester.

How it works:

  • Complete the notebook below, as follows. Click on a question, then from the menu select Insert -> Insert Cell Below. From the dropdown box (next to the "stop" symbol) choose whether you wish to insert 'Code' or 'Markdown' (i.e. text with HTML and LaTeX capabilities). Save regularly!
  • Press Shift-Enter to execute the code in a cell.
  • Submit via http://somas-uploads.shef.ac.uk/mas212 by 5pm on Mon 12th Dec. If you are off-campus, you will need to use a VPN to submit.
  • Each question will be marked out of 5.
  • This is an open-book test, which means you may consult books, notes and internet resources. Do not discuss or share your test with anyone. Copy-and-pasting is not permitted. Please give answers in your own words.
  • Class test 2 is intended to be more challenging than Class Test 1. Please don't spend an excessive amount of time on this, as it is only worth $5\%$ of the module mark.
  • All submissions will be automatically checked for excessive similarities. Where there is sufficient circumstantial evidence of excessive similarities, a mark of zero will be awarded for this test.

Key Resources:

  • Lecture materials and links on course web page: http://sam-dolan.staff.shef.ac.uk/mas212/
In [1]:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%matplotlib inline


A. Definitions and Concepts

1. Convergence. Define the spectral radius $\rho(R)$, where $R$ is a square matrix.
When does the iterative scheme $x$ $\rightarrow$ $Rx + c$ converge? (Here $x$ and $c$ are column vectors.)

The spectral radius $\rho(R)$ of an $n \times n$ matrix $R$ is given by the maximum magnitude of its eigenvalues $\lambda_i$ : $$ \rho(R) = \max_{i=1 \ldots n} \left| \lambda_i \right| . $$ See Lecture 8.

The iterative scheme $x \rightarrow R x + c$ converges if the spectral radius is less than unity, $\rho(R) < 1$.
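As an illustration (not part of the model answer; the $R$ and $c$ below are made-up examples), the spectral radius can be computed from np.linalg.eigvals, and the iteration checked against the fixed point $x = (I - R)^{-1} c$:

In [ ]:
# Illustrative sketch: R and c are made-up examples, not taken from the test.
R = np.array([[0.5, 0.2], [0.1, 0.3]])
c = np.array([1.0, 2.0])
print("rho(R) =", np.max(np.abs(np.linalg.eigvals(R))))   # approx 0.57 < 1: converges

x = np.zeros(2)
for k in range(100):
    x = np.dot(R, x) + c      # the iterative scheme x -> Rx + c
print("Iterated fixed point:", x)
print("Direct solution:     ", np.linalg.solve(np.eye(2) - R, c))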

2. Conditioning. (a) Define the row sum norm $||A||$ and the condition number $C$ of an $n\times n$ square matrix $A$.
(b) Consider the linear system $A (x + \Delta x) = b + \Delta b$. Without proof, give an upper bound for $|| \Delta x ||$ in terms of row sum norms of $x$, $b$, $\Delta b$, $A$ and $A^{-1}$.
(c) Find the condition number of the matrix below. Is $Ax = b$ well-conditioned or ill-conditioned, in this case? $$ A = \begin{bmatrix} 1 & 1/2 & 1/3 & 1/4 \\ 1/2 & 1/3 & 1/4 & 1/5 \\ 1/3 & 1/4 & 1/5 & 1/6 \\ 1/4 & 1/5 & 1/6 & 1/7 \end{bmatrix} $$

(a) The row sum norm of an $m \times n$ matrix $A$ is the maximum, taken over the rows, of the sum of the absolute values of the entries in each row: $$ \left\|A \right\| = \max_{1 \le i \le m} \sum_{j=1}^n \left| a_{ij} \right| . $$ The condition number $C$ is the product of the row sum norm of $A$ and the row sum norm of the inverse of $A$: $$ C = ||A|| \, ||A^{-1}||. $$

(b) In Lecture 8 we showed that $||\Delta x||$ satisfies the inequality $$ \frac{||\Delta x||}{||x||} \le \, C \, \frac{|| \Delta b||}{||b||}, $$ with $C$ as defined above.

(c) The easiest way to find the condition number is to use np.linalg.cond() (see code below; any other valid method is also accepted). The system $A x = b$ is ill-conditioned because $C = 28375 > 10^4$ in this case.


In [44]:
A = np.matrix([[1,1/2,1/3,1/4],[1/2,1/3,1/4,1/5],[1/3,1/4,1/5,1/6],[1/4,1/5,1/6,1/7]])
C = np.linalg.cond(A, p=1)   # p=1 uses the maximum column-sum norm; as A is symmetric this equals the row-sum (p=np.inf) value.
print(C)
28375.0
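As an optional cross-check, $C$ can be assembled directly from the definition; in numpy the row-sum norm is the infinity norm:

In [ ]:
# Cross-check: condition number from the definition C = ||A|| ||A^-1||,
# using the row-sum (infinity) norm.
normA = np.linalg.norm(A, np.inf)
normAinv = np.linalg.norm(np.linalg.inv(A), np.inf)
print(normA * normAinv)   # should agree with np.linalg.cond above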

3. Truncation error. Suppose we have a differential equation with an exact solution $\bar{x}(t)$ on the domain $t \in [0,1]$, and a numerical scheme which generates a numerical solution $x_k \equiv x(t_k)$, where $t_k = k h$, over the same domain $[0,1]$. Here $h = 1/n$ is the step size, $n$ is a positive integer, and $k \in \{0, 1, \ldots, n\}$.
The error at the $k$th step is $e_k \equiv x_k - \bar{x}(t_k)$.

(a) Define the local truncation error (LTE) and the global truncation error (GTE) of the numerical scheme.
(b) If the numerical scheme is third-order-accurate, how does the LTE and GTE scale with the step size $h$, as $h \rightarrow 0$?
(c) Briefly describe a practical method for checking that your implementation has the correct order of convergence.

(a) The local truncation error (LTE) is the error introduced in taking a single step of the numerical scheme, $e_{k+1} - e_k$. The global truncation error (GTE) is the error accumulated in solving the ODE over the whole domain, $e_n$, where $n = 1/h$ in this case.

(b) If the numerical scheme is third-order accurate then the GTE scales as $h^3$ and the LTE as $h^4$, as $h \rightarrow 0$.

(c) The ratio test is a practical method for testing the order of convergence. To apply it, we run with step sizes $h$, $h/2$ and $h/4$ and take a ratio of the differences, as described in Lab Class 5. This gives a quantity $r$ which should tend towards $2^p$ as $h \rightarrow 0$, where $p$ is the order of convergence (see the sketch below).
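A minimal sketch of the ratio test, using the forward Euler method (order $p=1$) on the made-up test problem $dx/dt = -x$, $x(0) = 1$:

In [ ]:
# Ratio test sketch: forward Euler (first order) on dx/dt = -x, x(0) = 1.
def euler_final(h):
    n = int(round(1/h))
    x = 1.0
    for k in range(n):
        x = x + h*(-x)      # one forward Euler step
    return x                # approximation to x(1) = exp(-1)

r = (euler_final(0.01) - euler_final(0.005)) / (euler_final(0.005) - euler_final(0.0025))
print(r)   # should approach 2**1 = 2 for a first-order scheme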


4. Stability. An MAS110 student decides to solve the differential equation below with the forward Euler method, and a step size of $h=0.01$. $$ \frac{dx}{dt} = 494 x + 994 y, \quad \quad \frac{dy}{dt} = -497 x - 997 y, \quad \quad x(0)=2, \; y(0)=0. $$

(a) What problem would the student encounter?
(b) Approximately, what is the maximum step size $h_{max}$ needed to avoid this problem with the forward Euler method?
(c) Suggest an alternative method to avoid this problem.

(a) The student would encounter a spurious numerical instability that causes the numerical solution to diverge exponentially.

(b) The exact solution to this system is a linear combination of modes that decay as $e^{-3t}$ and $e^{-500t}$ (the eigenvalues of the coefficient matrix are $-3$ and $-500$). The instability is triggered by the rapidly-decaying mode: forward Euler is stable only if $|1 + h \lambda| < 1$ for every eigenvalue $\lambda$, so the step size must satisfy $h < h_{max} \approx 2/500 = 0.004$.

(c) All the explicit methods we have met in this course are vulnerable to this kind of numerical instability. However, an implicit method such as the backward Euler method would not exhibit an instability in this case.
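A short sketch (not part of the model answer) confirming this: the coefficient matrix has eigenvalues $-3$ and $-500$, and forward Euler with $h = 0.01$ violates $|1 + h\lambda| < 1$ for $\lambda = -500$:

In [ ]:
# Stability check for the stiff system above.
M = np.array([[494., 994.], [-497., -997.]])
print(np.linalg.eigvals(M))        # -3 and -500, confirming the decay rates

def forward_euler(h, tmax=1.0):
    X = np.array([2.0, 0.0])       # initial condition x(0) = 2, y(0) = 0
    for k in range(int(round(tmax/h))):
        X = X + h*np.dot(M, X)     # forward Euler step
    return X

print(forward_euler(0.01))    # diverges: |1 + h*(-500)| = 4 > 1
print(forward_euler(0.002))   # stable:   |1 + h*(-500)| = 0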


B. Coding and implementation

5. Solving an ODE system numerically.
Use odeint() to find a numerical solution to the ODE system \begin{eqnarray} \frac{dx}{dt} &=& -\left( x^3 + x y^2 + y - x \right) \nonumber \ \frac{dy}{dt} &=& -\left( y^3 + x^2 y - y - x \right) \nonumber \end{eqnarray}

on the domain $t \in [0,20]$ with initial condition $x(0) = 0.01$, $y(0) = 0$.

Plot a phase portrait and briefly describe what you observe (in your own words).

In [9]:
def dXdt(X, t):
    x, y = X
    dxdt = -(x**3 + x*y**2 + y - x)
    dydt = -(y**3 + x**2*y - y - x)
    return [dxdt, dydt]

tmax = 20; tpts = 200;
ts = np.linspace(0, tmax, tpts);
ic = (0.01, 0);
Xs = odeint(dXdt, ic, ts);
plt.plot(Xs[:,0], Xs[:,1], 'r-');
plt.gca().set_aspect('equal');
plt.xlabel('x'); plt.ylabel('y');

The trajectory in the phase portrait spirals away from an unstable equilibrium at $x=y=0$ and converges upon a limit cycle at $x^2 + y^2 = 1$. (In polar coordinates the system reduces to $\dot{r} = r(1 - r^2)$, $\dot{\theta} = 1$, which makes the unit-circle limit cycle explicit.)

6. Animation. (Lecture 9. You may adapt the code in sinwave.py, as in Lab Class 9, Question 2).
Use matplotlib.animation.FuncAnimation() to create an animation with 400 frames, in which the $i$th frame shows a plot of $y(x)$ over the domain $x \in [-5,5]$ and range $y \in [-0.4,1]$ where $$ y(x) = \frac{\sin(\alpha_i x)}{\alpha_i x}, \quad \quad \alpha_i = 4 \sin\left( \frac{\pi i}{400} \right) + 0.1 . $$

In [21]:
%matplotlib notebook
from matplotlib import animation
fig = plt.figure()
ax = plt.axes(xlim=(-5,5), ylim=(-0.4, 1))
line, = ax.plot([], [], lw=2)
def init():  # Initialize with a blank plot
    line.set_data([], [])
    return line,
def animate(i):  # Plot the ith frame: y(x) = sin(alpha_i x)/(alpha_i x)
    x = np.linspace(-5, 5, 1000)   # this grid does not contain x = 0, so no division by zero
    alp = 4*np.sin(np.pi*i / 400) + 0.1
    y = np.sin(alp*x)/(alp*x)
    line.set_data(x, y)
    return line,
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=400, interval=20, blit=False)
plt.show()
%matplotlib inline
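If a standalone movie file is wanted, the animation can also be written to disk with, e.g., anim.save('sinc.mp4', fps=50) (hypothetical filename; this requires an external writer such as ffmpeg to be installed).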

7. The linear two-step method. (See Lab Class 6, Question 2(a)).
Write code to obtain a numerical solution of the differential equation $$ \frac{dx}{dt} = -x + \left\{ 1+ \tanh(t-20) \right\} \sin t, \quad \quad x(t < 0) = 0, $$ using the linear two-step method, with a step size of $h=0.02$ on the domain $t \in [0, 60]$.
Plot your solution.
Print the numerical value of $x(60)$ to 12 decimal places.

In [56]:
h = 0.02; tmax = 60;
n = int(tmax/h);
ts = np.linspace(0,tmax,n+1);
def dxdt(x,t):
    return -x + (1 + np.tanh(t-20))*np.sin(t)

xs = np.zeros(n+1)
# The two-step scheme needs xs[0] and xs[1] to start. Taking xs[1] = 0 is correct here,
# since one Euler step from x(0) = 0 gives 0 (the right-hand side vanishes at t = 0).
for k in range(1, n):
    xs[k+1] = xs[k] + h*(3/2*dxdt(xs[k],ts[k]) - 1/2*dxdt(xs[k-1],ts[k-1]))

plt.plot(ts, xs, 'r-'); plt.xlabel('t'); plt.ylabel('x(t)')
print("Final value: x(%.1f) = %.12f" % (ts[-1],xs[-1]))
Final value: x(60.0) = 0.647553457029
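As an optional cross-check (not required by the question), the same ODE can be passed to odeint(), which should agree with the two-step result to several decimal places:

In [ ]:
# Optional cross-check of the two-step result against odeint.
sol = odeint(dxdt, 0.0, ts)
print("odeint: x(%.1f) = %.12f" % (ts[-1], sol[-1, 0]))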

8. An iterative matrix method. (Lecture 8).
Solve the 3D linear system $(A + \Delta A) x = b$ for $x$ by using a suitable iterative method, with $A$, $\Delta A$, $b$ as given below.

Unfortunately, there was a typo in slide 22 of Lecture 8 that misled some students (a missing $\Delta$). This has been taken into account in the mark scheme.

In [15]:
np.random.seed(1)
A = np.diag([2,3,-1])
dA = np.matrix(0.2*(2*np.random.rand(9)-1).reshape(3,3))  # This is "Delta A"
b = np.matrix("1 ; 2 ; 3")
In [17]:
# Here I attempt to use the iterative improvement method of Lecture 8.
B0 = np.diag([1/2,1/3,-1])   # B0 = A^{-1}, since A is diagonal
R = -np.dot(B0, dA)          # rearranging (A + dA)x = b gives x = -A^{-1} dA x + A^{-1} b
c =  np.dot(B0, b)
tol = 1e-12
maxits = 30
x = np.matrix("1; 1; 1")  # initial guess
converged = False
for i in range(maxits):
    print(i, x.transpose())
    xnew = np.dot(R,x) + c
    if np.max(np.abs(x - xnew)) < tol:
        converged = True
        break
    x = xnew
if converged:
    print("\nAfter %i iterations, the solution x converged to: \n(%.12f, %.12f, %.12f)" %(i, x[0],x[1],x[2]))
else:
    print("\nThe scheme did not converge.")
err = np.dot(A+dA, x) - b
print("\nError:  A x - b = \n(%e, %e, %e)" %(err[0],err[1],err[2]))
0 [[1 1 1]]
1 [[ 0.57250783  0.79447639 -3.22856463]]
2 [[ 0.15170998  0.54368654 -2.98760958]]
3 [[ 0.17986761  0.53388121 -2.92925821]]
4 [[ 0.18660077  0.53733318 -2.93459565]]
5 [[ 0.18602678  0.53738311 -2.93543348]]
6 [[ 0.18593129  0.53732479 -2.93532994]]
7 [[ 0.18594263  0.53732515 -2.93531863]]
8 [[ 0.18594393  0.53732608 -2.93532054]]
9 [[ 0.18594372  0.53732606 -2.93532068]]
10 [[ 0.18594371  0.53732604 -2.93532065]]
11 [[ 0.18594371  0.53732604 -2.93532065]]
12 [[ 0.18594371  0.53732604 -2.93532065]]
13 [[ 0.18594371  0.53732604 -2.93532065]]
14 [[ 0.18594371  0.53732604 -2.93532065]]
15 [[ 0.18594371  0.53732604 -2.93532065]]

After 15 iterations, the solution x converged to: 
(0.185943710112, 0.537326044625, -2.935320647824)

Error:  A x - b = 
(-7.882583e-15, -1.330047e-13, -1.558753e-13)
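For comparison (optional), the direct solver should reproduce the iterated solution to machine precision:

In [ ]:
# Direct solve as a cross-check on the iterative method.
print(np.linalg.solve(A + dA, b).transpose())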

9. Linear regression. (Lecture 7).
For the data set below, find the line of best fit $y = mx + c$, using linear regression.
Calculate $m$ and $c$, with any valid method, and print to screen.
Plot the data set along with the line of best fit.

In [33]:
xs = np.array([ 0.01777072, -0.01156888,  0.21384365,  0.37845755,  0.43702859,
        0.58597224,  0.66804486,  0.77566893,  0.89928527,  1.01621027])
ys = np.array([-0.25776996, -0.3336627 , -0.24307303,  0.64467634,  0.89869925,
        0.87292456,  0.6649084 ,  1.2645241 ,  1.88451126,  1.69648964])
# numpy can calculate the variance and covariance. 
# Note that cov() returns a matrix, with variances on the diagonal, and the covariances on the off-diagonal.
# We need to set bias=True to divide by the number of points N, rather than by N-1.
cov = np.cov(xs, ys, bias=True)
varx = cov[0,0]; vary = cov[1,1]; covar = cov[0,1];
meanx = np.mean(xs); meany = np.mean(ys);
m = covar / varx;
c = meany - m*meanx;
xline = np.linspace(-0.2, 1.2, 2)
plt.plot(xs, ys, 'bo')
plt.plot(xline, m*xline + c, 'r-')
plt.xlabel('x'); plt.ylabel('y');
print("m = %.6f, c = %.6f" % (m,c))
m = 2.125525, c = -0.349440
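An independent check (optional): a degree-1 polynomial fit with np.polyfit() should reproduce the same coefficients, since both methods minimise the sum of squared residuals:

In [ ]:
# Independent check of m and c via a degree-1 least-squares polynomial fit.
m2, c2 = np.polyfit(xs, ys, 1)
print("m = %.6f, c = %.6f" % (m2, c2))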

10. Discrete Fourier Transform. (Lecture 10).
The array below gives the Fourier coefficients $\tilde{X}_k$ which were calculated from a data set $x_j$ using the function np.fft.fft(). Apply an inverse Discrete Fourier Transform to $\tilde{X}_k$ to compute $x_j$. Plot the data set $x_j$, which is linearly-spaced across the domain $t \in [0,2]$.

In [70]:
Xtilde = np.array([  9.02056208e-17 +0.00000000e+00j,
        -9.94869288e-02 +6.28135748e-01j,
         2.01516962e-01 -6.20205437e-01j,
        -1.45857393e-01 +2.86261252e-01j,
         5.29694314e-02 -7.29061677e-02j,
        -1.07420795e-02 +1.07420795e-02j,
         1.26523882e-03 -9.19249812e-04j,
        -9.89294418e-05 +5.04070683e-05j,
        -5.40655653e-06 +1.75669671e-06j,
        -9.91065797e-06 +1.56969402e-06j,
        -9.96735094e-06 +1.66533454e-16j,
        -9.91065797e-06 -1.56969402e-06j,
        -5.40655653e-06 -1.75669671e-06j,
        -9.89294418e-05 -5.04070683e-05j,
         1.26523882e-03 +9.19249812e-04j,
        -1.07420795e-02 -1.07420795e-02j,
         5.29694314e-02 +7.29061677e-02j,
        -1.45857393e-01 -2.86261252e-01j,
         2.01516962e-01 +6.20205437e-01j,  
        -9.94869288e-02 -6.28135748e-01j])

xs = np.fft.ifft(Xtilde)
ts = np.linspace(0,2,len(xs))
plt.plot(ts, xs.real, 'r-o')
plt.xlabel('t'); plt.ylabel('x')
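A quick consistency check (optional): applying the forward transform to the recovered samples should reproduce Xtilde, and the imaginary parts of xs should vanish to rounding error, since the Fourier coefficients above are conjugate-symmetric (i.e. the original data set was real):

In [ ]:
# Round-trip check: fft(ifft(Xtilde)) should recover Xtilde.
print(np.allclose(np.fft.fft(xs), Xtilde))   # expect True
print(np.max(np.abs(xs.imag)))               # ~1e-16: the signal is real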