Inaccuracy in compute_cost in DLS2 W3A1 Tensorflow_introduction

My code prints out

tf.Tensor(0.81028694, shape=(), dtype=float32)

While the expected output is

tf.Tensor(0.810287, shape=(), dtype=float32)

The test code is:

assert (np.abs(result - (0.50722074 + 1.1133534) / 2.0) < 1e-7), "Test does not match. Did you get the reduce sum of your loss functions?"

Note that (0.50722074 + 1.1133534) / 2.0 calculates to 0.81028707, which is 0.00000013, or 1.3e-7, away from 0.81028694.

I believe the assert tolerance is too strict; changing "1e-7" to "1e-6" would make the test pass.
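The arithmetic above can be checked directly (this is just a sanity check on the reported numbers, not course code):

```python
import numpy as np

# Reference value the grader computes, in float64.
ref = (0.50722074 + 1.1133534) / 2.0   # approximately 0.81028707

# The float32 value my compute_cost printed.
got = float(np.float32(0.81028694))

diff = abs(got - ref)
# diff is roughly 1.3e-7: it fails the 1e-7 tolerance but would pass 1e-6.
print(diff)
```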

Separately, I don’t understand why my answer is not exactly the same as the standard answer. If anyone has an idea, please let me know.

I’m guessing that you manually applied softmax, rather than using the from_logits argument to force the cost function to apply the softmax. That gives different behavior precisely because using from_logits is more numerically stable.
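The difference between the two routes can be sketched in plain NumPy (a minimal illustration with made-up logits, not the assignment's values; from_logits=True corresponds to the log-sum-exp path):

```python
import numpy as np

# Hypothetical logits and one-hot labels for two examples.
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]], dtype=np.float32)
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]], dtype=np.float32)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Route 1: apply softmax yourself, then log (what from_logits=False expects).
probs = softmax(logits)
loss_manual = -np.sum(labels * np.log(probs), axis=1)

# Route 2: compute log-softmax directly from the logits via log-sum-exp,
# which is what from_logits=True does internally. The probabilities are
# never formed, so no precision is lost exponentiating and re-logging.
shifted = logits - logits.max(axis=1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss_from_logits = -np.sum(labels * log_probs, axis=1)
```

With benign values the two routes agree to several decimal places, but in float32 with large-magnitude logits the manual route loses precision, which is why the result can differ from the reference in the last digit or two.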


Yes, you are right. I tried using "from_logits=True" instead and it passed the test! Thanks!

Great! Here’s an earlier thread about this same point. And here’s another one that is relevant.