Derivatives LAB question

Really interesting question! I tried experimenting with this using a slightly more general implementation so that we can watch what happens as epsilon gets smaller, and here’s what I get:

epsilon 0.0001 f_prime_approx 6.000100000012054 approx_error 0.00010000001205412445
epsilon 1e-05 f_prime_approx 6.000009999951316 approx_error 9.999951315897704e-06
epsilon 1.0000000000000002e-06 f_prime_approx 6.0000010009275675 approx_error 1.000927567496035e-06
epsilon 1.0000000000000002e-07 f_prime_approx 6.000000087880152 approx_error 8.788015204430621e-08
epsilon 1.0000000000000004e-08 f_prime_approx 5.999999963535172 approx_error 3.646482760188974e-08
epsilon 1.0000000000000005e-09 f_prime_approx 6.000000496442223 approx_error 4.96442223330007e-07
epsilon 1.0000000000000006e-10 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-11 f_prime_approx 6.000000496442222 approx_error 4.964422224418286e-07
epsilon 1.0000000000000006e-12 f_prime_approx 6.0005334034940425 approx_error 0.000533403494042517
epsilon 1.0000000000000007e-13 f_prime_approx 6.004086117172843 approx_error 0.004086117172843018
epsilon 1.0000000000000008e-14 f_prime_approx 6.217248937900871 approx_error 0.2172489379008713
epsilon 1.0000000000000009e-15 f_prime_approx 5.329070518200747 approx_error 0.670929481799253
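
For reference, here’s a minimal sketch of the kind of loop that produces output in this format. The original function isn’t shown above, but f(x) = x^2 evaluated at x = 3 (so the true derivative is 6) is consistent with these numbers, so that’s what I’m assuming in the sketch:

```python
def f(x):
    return x ** 2  # assumed test function; f'(3) = 6 matches the output above

def f_prime_approx(f, x, epsilon):
    # one-sided (forward) difference quotient
    return (f(x + epsilon) - f(x)) / epsilon

x = 3.0
true_derivative = 6.0  # f'(3) for the assumed f
epsilon = 0.0001
while epsilon > 1e-15:
    approx = f_prime_approx(f, x, epsilon)
    print("epsilon", epsilon, "f_prime_approx", approx, "approx_error", abs(approx - true_derivative))
    # dividing by 10 each pass also explains the 1.0000000000000002e-06-style epsilons above
    epsilon /= 10
```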

In particular, I get the same value you got for \epsilon = 10^{-14}, but you can see the general trend: the error keeps shrinking down to \epsilon = 10^{-8}, stabilizes (at a slightly worse level) from 10^{-9} through 10^{-11}, and then goes off the rails at 10^{-12} and gets worse from there.

We’d have to do more work to really prove this, but my guess is that we are hitting the limits of accuracy of 64-bit floating point. The relative resolution of the mantissa is between 10^{-15} and 10^{-16} (see the IEEE 754 article on Wikipedia for more details), so what would work in the full infinite resolution of \mathbb{R} just can’t be achieved in a finite representation. In other words, the mathematical concept of “limits” stops working once you go past the resolution of whatever representation you are using.
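
To put concrete numbers on that resolution, here’s a small illustrative check (again using the assumed f(x) = x^2 at x = 3):

```python
import math
import sys

print(sys.float_info.epsilon)  # ~2.22e-16: relative spacing of 64-bit floats near 1.0
print(math.ulp(9.0))           # ~1.78e-15: absolute spacing of floats near f(3) = 9

# With epsilon = 1e-15 the numerator f(3 + eps) - f(3) should be about 6e-15,
# which is only a few representable steps away from 9, so the difference
# quotient is dominated by rounding: it comes out around 5.3 instead of 6.
eps = 1e-15
print(((3.0 + eps) ** 2 - 3.0 ** 2) / eps)
```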

The other interesting experiment here would be to redo this using “two-sided” (central) differences. Those are in theory more accurate than the “one-sided” technique we are using here (the truncation error shrinks like \epsilon^2 rather than \epsilon), but we’ll still hit a rounding wall at some point before we get to 10^{-16}.
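
If you want to try it, here’s a minimal sketch of the two-sided version, under the same assumed f(x) = x^2 as above:

```python
def f_prime_central(f, x, epsilon):
    # two-sided (central) difference: the truncation error is O(epsilon**2)
    # instead of O(epsilon), but the subtraction in the numerator still loses
    # precision once epsilon approaches the float spacing around x
    return (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)

# For the assumed quadratic f the epsilon**2 term cancels exactly, so any
# deviation from 6 here is pure rounding, and it still degrades badly
# once epsilon gets down near 1e-15.
print(f_prime_central(lambda x: x ** 2, 3.0, 1e-4))
print(f_prime_central(lambda x: x ** 2, 3.0, 1e-15))
```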
