Derivatives LAB question

Hello guys!

So in the derivatives lab, we experiment with calculating the derivative of w^2, which we know is 2*w. That means for w = 3 the derivative is 6.

I tried reducing epsilon to a very small number and the result is not as satisfying. It is stated that the smaller the epsilon, the better the result… can you please tell me what is wrong here?

Really interesting question! I tried experimenting with this using a slightly more general implementation so that we can watch what happens as epsilon gets smaller, and here’s what I get:
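Roughly, the loop looks like the sketch below; the function f(w) = w ** 2, the point w = 3, and the print format just mirror the lab example, so treat the details as illustrative:

```python
# Sketch: one-sided (forward) difference for f(w) = w**2 at w = 3,
# shrinking epsilon by a factor of 10 each pass and comparing to the exact 2*w.

def f(w):
    return w ** 2

w = 3.0
f_prime_exact = 2.0 * w

epsilon = 1e-4
while epsilon > 1e-16:
    f_prime_approx = (f(w + epsilon) - f(w)) / epsilon
    approx_error = abs(f_prime_approx - f_prime_exact)
    print("epsilon", epsilon, "f_prime_approx", f_prime_approx, "approx_error", approx_error)
    epsilon *= 0.1
```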

epsilon 0.0001 f_prime_approx 6.000100000012054 approx_error 0.00010000001205412445
epsilon 1e-05 f_prime_approx 6.000009999951316 approx_error 9.999951315897704e-06
epsilon 1.0000000000000002e-06 f_prime_approx 6.0000010009275675 approx_error 1.000927567496035e-06
epsilon 1.0000000000000002e-07 f_prime_approx 6.000000087880152 approx_error 8.788015204430621e-08
epsilon 1.0000000000000004e-08 f_prime_approx 5.999999963535172 approx_error 3.646482760188974e-08
epsilon 1.0000000000000005e-09 f_prime_approx 6.000000496442223 approx_error 4.96442223330007e-07
epsilon 1.0000000000000006e-10 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-11 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-12 f_prime_approx 6.0005334034940425 approx_error 0.000533403494042517
epsilon 1.0000000000000007e-13 f_prime_approx 6.004086117172843 approx_error 0.004086117172843018
epsilon 1.0000000000000008e-14 f_prime_approx 6.217248937900871 approx_error 0.2172489379008713
epsilon 1.0000000000000009e-15 f_prime_approx 5.329070518200747 approx_error 0.670929481799253

In particular, I get the same value you got for \epsilon = 10^{-14}, but you can see the general trend: things get better until around 10^{-8}, then stabilize for a while, and then go off the rails at 10^{-12} and get worse and worse. We’d have to do more work to really prove this, but my guess is that we are hitting the limits of accuracy in 64-bit floating point. The resolution of the mantissa is between 10^{-15} and 10^{-16} (see the IEEE 754 article on Wikipedia for more details), so what would work in the full infinite resolution of \mathbb{R} just can’t be achieved in a finite representation. In other words, the mathematical concept of “limits” just doesn’t work beyond the resolution of whatever representation you are using.
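You can poke at that resolution limit directly; for example (numpy here only to read off machine epsilon, everything else is plain 64-bit float behavior):

```python
import numpy as np

# Machine epsilon for 64-bit floats: the relative spacing between
# adjacent representable numbers, roughly 2.22e-16.
print(np.finfo(np.float64).eps)    # 2.220446049250313e-16

# An epsilon far below that spacing simply disappears when added to 3.0,
# so the numerator f(w + epsilon) - f(w) loses almost all of its digits.
print((3.0 + 1e-16) == 3.0)        # True
```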

The other interesting experiment here would be to redo this using “two-sided” differences. Those are in theory more stable than the “one-sided” technique we are using here, but we’ll still hit a wall at some point before we get to 10^{-16}.


Just out of curiosity, I added another loop to do it using the “two-sided” difference:

f'(x) \approx \displaystyle \frac {f(x + \epsilon) - f(x - \epsilon)}{2 \epsilon}
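Inside the same kind of loop as the sketch above, that’s a one-line change to the approximation:

```python
# Two-sided (central) difference: step to both sides of w.
f_prime_approx = (f(w + epsilon) - f(w - epsilon)) / (2 * epsilon)
```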

and here’s what that gives:

Two Sided Differences
epsilon 0.0001 f_prime_approx 6.000000000012662 approx_error 1.2661871551244985e-11
epsilon 1e-05 f_prime_approx 6.000000000039306 approx_error 3.930633596382904e-11
epsilon 1.0000000000000002e-06 f_prime_approx 6.000000000838667 approx_error 8.386669136939418e-10
epsilon 1.0000000000000002e-07 f_prime_approx 5.999999990180526 approx_error 9.819474122707561e-09
epsilon 1.0000000000000004e-08 f_prime_approx 5.999999963535172 approx_error 3.646482760188974e-08
epsilon 1.0000000000000005e-09 f_prime_approx 6.000000496442223 approx_error 4.96442223330007e-07
epsilon 1.0000000000000006e-10 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-11 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-12 f_prime_approx 6.0005334034940425 approx_error 0.000533403494042517
epsilon 1.0000000000000007e-13 f_prime_approx 5.995204332975842 approx_error 0.004795667024158234
epsilon 1.0000000000000008e-14 f_prime_approx 6.12843109593086 approx_error 0.12843109593085966
epsilon 1.0000000000000009e-15 f_prime_approx 5.329070518200747 approx_error 0.670929481799253

The behavior is pretty interesting: it’s much better at the larger values of \epsilon, hitting its best approximation at the largest value of 10^{-4}, and it just gets worse from there. It shows the same stabilization around 10^{-9} and then goes off the rails in almost exactly the same way as the one-sided version.
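Part of why it does so well at the larger values of \epsilon: for this particular f(w) = w^2 the two-sided formula is exact in ordinary arithmetic,

\displaystyle \frac{(w + \epsilon)^2 - (w - \epsilon)^2}{2 \epsilon} = \frac{4 w \epsilon}{2 \epsilon} = 2w

so whatever error we see is pure floating-point rounding, and that rounding gets amplified more and more as \epsilon shrinks.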


This is really interesting indeed! Thanks for the explanation. I guess you’re right! It has something to do with 64-bit floats :pray: