Jupyter notebooks and other materials developed for the Columbia course APMA 4300
This work is licensed under a Creative Commons Attribution 4.0 International License.
I'm pretty sure there is a typo on the last line of this lecture. Instead of:
I think it should be:
Want to make the content more modular and re-orderable for ease of reuse.
The text under "Plotting Stability Regions" and "Absolute Stability of the Forward Euler Method" mentions the 'complex plain'; it should be 'complex plane'.
As described on computational science.
For the floating point arithmetic Ax = b example, the solver I used generated x = [1; 16], and not x = [-0.5; 16].
I saw you revised this part before, but it did not look correct.
If
If
Should not the sign be reversed?
The code for Example 2 (last example) should be
numpy.all(error < 8.0 * numpy.finfo(float).eps):
not
numpy.all(error < 100.0 * numpy.finfo(float).eps):
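For reference, a tolerance check of that form can be sketched as follows (the vectors here are my own illustrative assumptions, not the notebook's data):

```python
import numpy

# Sketch of the tolerance check: compare a computed result against the
# exact one to within a small multiple of machine epsilon.
x_exact = numpy.array([1.0, 2.0, 3.0])
x_computed = x_exact * (1.0 + 2.0 * numpy.finfo(float).eps)
error = numpy.abs(x_computed - x_exact) / numpy.abs(x_exact)
print(numpy.all(error < 8.0 * numpy.finfo(float).eps))
```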
The expression given by
should be
Should be du / dt = lambda u in the second line under the Example: Forward Euler on a Linear Problem.
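For context, a minimal sketch of forward Euler applied to that linear problem (the values lambda = -2, u(0) = 1, and the step count are my own assumptions for illustration):

```python
import math

# Forward Euler on du/dt = lambda * u with u(0) = 1.
lam = -2.0
t_final = 1.0
n_steps = 100
delta_t = t_final / n_steps

u = 1.0
for n in range(n_steps):
    u = u + delta_t * lam * u      # forward Euler update

exact = math.exp(lam * t_final)
print(abs(u - exact))  # O(delta_t) global error
```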
This is throughout at least 10_LA_QR. Probably want 'complement' (from set theory) and not 'compliment' which is an expression of approval.
Under 'Newton-Cotes Quadrature,' the lecture contains the phrase:
"evaluate f(x) at these points and exactly integrate the interpolating polynomial exactly."
Was this intended?
Typo 1.)
"Smallest number that can be represented is the underflow:
Should be "Smallest number that can be represented is the underflow:
Typo 2.)
"Smallest number that can be represented is the underflow:
Should be "Smallest number that can be represented is the underflow:
When describing the Adams-Bashforth method, there is a minor formatting error in the list describing the steps.
At 5 Root Finding and Optimization, Asymptotic Convergence of Newton's Method
... Let
$g(x) = x - \frac{f(x)}{f'(x)}$
, then
...
What about $g'(x^*)$, though:
$$g'(x) = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x)}{f''(x)}$$
which simplifies when evaluated at $x = x^*$ to
$$g'(x^*) = \frac{f(x^*)}{f''(x^*)} = 0$$
...
Since the Quotient Rule gives
$$\left ( \frac{u}{v} \right )' = \frac{u' v - v' u}{v^2},$$
I think it should be
$$g'(x) = 1 - \frac{f'(x) f'(x) - f(x) f''(x)}{f'^2(x)} = 1 - 1 + \frac{f(x) f''(x)}{f'^2(x)}.$$
Nonetheless,
$$g'(x^*) = \frac{f(x^*) f''(x^*)}{f'^2(x^*)} = 0, \qquad g''(x^*) = \frac{f''(x^*)}{f'(x^*)},$$
so it won't affect the following result.
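The quadratic convergence that follows from $g'(x^*) = 0$ can be checked numerically; the example $f(x) = x^2 - 2$ with root $x^* = \sqrt{2}$ is my own assumption for illustration:

```python
# Since g'(x*) = 0, Newton's method converges quadratically: the error is
# roughly squared each iteration.
f = lambda x: x**2 - 2.0
f_prime = lambda x: 2.0 * x

x_star = 2.0**0.5
x = 1.0
errors = []
for k in range(4):
    x = x - f(x) / f_prime(x)          # Newton update x <- g(x)
    errors.append(abs(x - x_star))

# e_{k+1} / e_k^2 settles near f''(x*) / (2 f'(x*)) ~ 0.354
for e_prev, e_next in zip(errors, errors[1:]):
    print(e_next / e_prev**2)
```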
I think there is a typo in the lecture, in the "Truncation Error for Multi-Step Methods" section.
In the last expression I think the general form of the $q$-th term is miswritten. A summation seems to be missing, and I also think the $1/q!$ factor should not multiply both the $\alpha$ and $\beta$ terms (it should only multiply the $\alpha$ term). For example:
\Delta t^{q - 1} \left (\frac{1}{q!} \left(j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n)
should be:
\Delta t^{q - 1} \left( \sum^r_{j=0} \left (\frac{1}{q!} j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n)
Hi Kyle,
I found that in your notebook '~' is used for empty spaces, however, it won't get rendered correctly in github (right above the second cell block):
https://github.com/mandli/intro-numerical-methods/blob/master/14_LA_iterative.ipynb
It is rendered correctly on nbviewer however:
http://nbviewer.jupyter.org/github/mandli/intro-numerical-methods/blob/master/14_LA_iterative.ipynb
The way to get around this is to use \quad or an escaped space (a backslash followed by a space).
I also use jupyter notebook for self-study and came across your wonderful lecture notes, when you cited an issue I raised for nbviewer:
jupyter/nbviewer#590
Cheers,
Zhangyi
Plots under L-stability section have typos in labels:
"Comparison of error for backward euler" should have label for Backward Euler.
"Comparison of errors for trapezoidal rule" should have label for Trapezoidal Rule (not Forward Euler).
In "Example: 4-stage Runge-Kutta Method"
y_2 = u_4[n] + 0.5 * delta_t * f(t_n + 0.5 * delta_t, y_1)
y_3 = u_4[n] + 0.5 * delta_t * f(t_n + 0.5, y_2)
The time argument in the second line appears to be missing a factor of delta_t; it should read t_n + 0.5 * delta_t.
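For reference, one full step of the classical 4-stage Runge-Kutta method with the corrected time argument can be sketched as follows (the test problem u' = -u, u(0) = 1 is my own assumption):

```python
import math

# One step of classical RK4; note the delta_t factor in the stage times.
f = lambda t, u: -u
delta_t = 0.1
t_n = 0.0
u_n = 1.0

y_1 = u_n
y_2 = u_n + 0.5 * delta_t * f(t_n, y_1)
y_3 = u_n + 0.5 * delta_t * f(t_n + 0.5 * delta_t, y_2)  # corrected stage
y_4 = u_n + delta_t * f(t_n + 0.5 * delta_t, y_3)
u_next = u_n + delta_t / 6.0 * (f(t_n, y_1)
                                + 2.0 * f(t_n + 0.5 * delta_t, y_2)
                                + 2.0 * f(t_n + 0.5 * delta_t, y_3)
                                + f(t_n + delta_t, y_4))

print(abs(u_next - math.exp(-delta_t)))  # local error, O(delta_t**5)
```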
In the block containing
x = [0.2, None, None, 0.5]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
I believe there is an error. The golden ratio phi is defined as 1.61803, but if that is true, then we should be dividing by the golden ratio, not multiplying by it. The code happens to work because the interval chosen in the notes is small enough such that x[1] and x[2] happen to fall within the interval. Making the interval larger will prevent this from happening and the brackets will diverge away.
Something along these lines was mentioned in class, but I figured I'd raise the potential issue just to be thorough.
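To illustrate, dividing by phi keeps the probe points inside the bracket even for a wide interval (the quadratic objective and bracket below are my own assumptions):

```python
# Golden section search: with phi = 1.61803..., the interior points are
# placed by dividing the interval width by phi, so both points always lie
# inside [a, b] regardless of the bracket size.
f = lambda x: (x - 1.0)**2

phi = (1.0 + 5.0**0.5) / 2.0
a, b = -10.0, 10.0                 # a deliberately wide bracket
for _ in range(60):
    x_1 = b - (b - a) / phi
    x_2 = a + (b - a) / phi
    if f(x_1) < f(x_2):
        b = x_2                    # minimum bracketed in [a, x_2]
    else:
        a = x_1                    # minimum bracketed in [x_1, b]

print((a + b) / 2.0)  # near the minimizer x = 1
```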
"Numerics compliment analytical methods" --> complement
...or go find a good one.
The last equation in the last block has an error. The backwards substitution method starts with i+1, not i-1. See
It doesn't make sense as written so hopefully people will realize it, but this could confuse people!
"In nomalized floating point systems" --> "normalized"
A small typo in line 28 of the code block under Adams-Moulton Methods.
axes.set_xlabel("u(t)")
should be axes.set_ylabel
The same typo also appears where similar graphs are plotted elsewhere in the file.
Typo for secant method comments:
"Not gauranteed to converge"
In 10_linear_algebra, "complement" is misspelled as "compliment"
"Interpolation Alogithms: Repeated parabolic interpolation" — "Alogithms" should be "Algorithms".
In Complimentary Projectors, line that reads
I-2P-P
should be
I-2P+P
"Plotting the error as a function of
Typo or unfinished block within Taylor Series Methods?
Example (no math mode):
[ u^{(p)}(t_n) = f^{(p-1)}(t_n, u(t_n)) ]
Near the end of the global error discussion for the forward Euler example, there is a typo in the line:
"In other words the global error is bounded by the original global erro(r) and"
In "Example: Vandermonde Matrix," I believe the y matrix should be y1, y2 ... ym.
c = a^2 / b → a / b = a^2 / [b (b - a^2 / b)]
a / b = a^2 / (b^2 - a^2)
(there's an extra factor of 1 / b^2 in the notes)
For Bracketing Algorithm - Basic Idea:
"If
If
Shouldn't this be:
"If
If
Under the shooting method, instead of
min_{v_2(0)} |pi/2 - v_2(2)|
should be
min_{v_2(0)} |pi/2 - v_1(2)|
Covers chapters 00 - 04. Notes:
- negative indexing: `grades[-1]`, `grades[-2]`
- getting help with `help()` and with `?`
- Jupyter notebook autocompletion using tab, and asking them to explore each function of every data structure
- the `range` function is defined as a counter and does not print its values
- `print(numpy.linspace(-1, 1, 10))`
- `%precision 3`
- `dtype=complex` is done in the following explanation
- `%matplotlib notebook`, made for Jupyter notebooks; the plot becomes interactive
- `plt.colorbar()`
In 5_root_finding_optimization.ipynb, Analysis of Fixed Point Iteration part, it writes:
Using a Taylor expansion we know
$$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(c) e_k^2}{2}$$
$$x^* + e_{k+1} = g(x^*) + g'(x^*) e_k + \frac{g''(c) e_k^2}{2}$$
Why it's
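The linear contraction implied by that expansion, $e_{k+1} \approx g'(x^*) e_k$, can be illustrated numerically; the map $g(x) = \cos x$ with fixed point $x^* \approx 0.7390851$ is my own assumed example:

```python
import math

# Fixed point iteration x_{k+1} = g(x_k); the error ratio e_{k+1} / e_k
# should approach g'(x*).
g = math.cos

x_star = 0.7390851332151607        # fixed point of cos(x)
x = 1.0
ratios = []
for k in range(10):
    e_k = x - x_star
    x = g(x)
    ratios.append((x - x_star) / e_k)

# ratios approach g'(x*) = -sin(x*) ~ -0.6736
print(ratios[-1])
```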
There seems to be an error in the graphs showing the difference between euler and the leap-frog methods. The leapfrog graph just shows one data point at 0 and a vertical, dashed black line to it.
In 7_Differentiation, Examples, Example 1: 1st order Forward and Backward Differences, Computing Order of Convergence part, there are two equations:
e(Δx) = Δx^n + b
log e(Δx) = n logb logΔx
However, I don't think log e(Δx) equals that.
If I change the e(Δx), equations seem to be correct then.
e(Δx) = b Δx^n
log e(Δx) = log b + n log Δx
And the following plot supports that.
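The corrected relation can also be checked by fitting a line in log-log space; the synthetic errors below (with b = 2, n = 1) are my own assumption for illustration:

```python
import numpy

# Estimate the order n from log e = log b + n log(dx) via least squares.
delta_x = numpy.array([0.1, 0.05, 0.025, 0.0125])
error = 2.0 * delta_x**1           # synthetic first-order errors, b = 2
n, log_b = numpy.polyfit(numpy.log(delta_x), numpy.log(error), 1)

print(n)                 # estimated order, ~1.0
print(numpy.exp(log_b))  # estimated constant b, ~2.0
```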
In the 04_error.ipynb notebook the two categories of numerical errors are listed as truncation error and floating point error. I tend to think of truncation error as the error made in a single step of approximating an ODE. I would like to separate truncation error into two other different kinds of errors, discretization error (error associated with using a simpler function) and convergence error (errors that accumulate over multiple steps in an algorithm).
Would this be okay?
The images for 10_linear_algebra are not in the images folder
In Backward Substitution, Solving Ax=b, 15_LA_gaussian, it writes:
Backwards substitution requires us to move from the last row of
$U$ and move upwards. We can consider again the general $i$th row with
$$
U_{i,i} x_i + U_{i,i-1} x_{i-1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m = y_i
$$
noting that we are using the fact that the matrix $L$ has 1 on its diagonal. We can now solve for $y_i$ as
$$
x_i = \frac{1}{U_{i,i}} \left( y_i - ( U_{i,i-1} x_{i-1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m) \right )
$$
Both equations have an index mistake: in backward substitution the remaining terms in row $i$ of matrix $U$ should run upward from column $i+1$ (i.e., $U_{i,i+1} x_{i+1} + \ldots$), not downward from $i-1$. Also, the sentence between the two equations should be "We can now solve for $x_i$ as ...".
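A minimal sketch of backward substitution with the corrected indices (the 2x2 system is my own illustrative example):

```python
import numpy

def backward_substitution(U, y):
    """Solve U x = y for upper-triangular U, moving from the last row up."""
    m = U.shape[0]
    x = numpy.zeros(m)
    for i in range(m - 1, -1, -1):
        # the known terms start at column i + 1, not i - 1
        x[i] = (y[i] - numpy.dot(U[i, i + 1:], x[i + 1:])) / U[i, i]
    return x

U = numpy.array([[2.0, 1.0],
                 [0.0, 3.0]])
y = numpy.array([5.0, 6.0])
print(backward_substitution(U, y))
```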
In the code for the "2-digit precision base 2 system" in the 04_error notebook, we see the system defined as follows:
axes.plot( (d1 + d2 * 0.1) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.1) * 2**E, 0.0, 'r+', markersize=20)
Shouldn't the 0.1 be 0.5, since this is binary and not decimal (or maybe I am missing something)?
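For what it's worth, enumerating the toy system with the 0.5 weighting suggested above gives the expected binary spacing (the exponent range E in -1..1 is my own assumption for illustration):

```python
# Non-negative values of a 2-digit binary system d1.d2 * 2**E, with the
# second digit weighted by 0.5 rather than the decimal 0.1.
values = set()
for d1 in (0, 1):
    for d2 in (0, 1):
        for E in range(-1, 2):
            values.add((d1 + d2 * 0.5) * 2**E)

print(sorted(values))
```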