C4 W4 A1 who_is_it 0.0000003 difference in distance

Hi there,

As you can see, I have a tiny discrepancy between my distance figure and the figure in the test cell. Is this just a bug? I’d like to check before I submit my assignment.

Please advise.

0.5992946 is the correct value. Please click my name and message me your notebook as an attachment.

Hi Balaji,

I have checked the other exercises and ‘verify’ is also incorrect by the same amount, so I will focus there and hopefully I’ll be able to find my bug.

Thank you

FWIW, I checked my results for verify and I get the same value you show:

with tf.norm dist = 0.599294900894165 
Dist for younes = tf.Tensor(0.5992949, shape=(), dtype=float32) 
It's younes, welcome in!

(<tf.Tensor: shape=(), dtype=float32, numpy=0.5992949>, True)

It turns out you can use either TF functions or plain numpy here; it works either way, and the grader accepts both:

with np.linalg.norm dist = 0.599294900894165 
Dist for younes = 0.5992949 
It's younes, welcome in!

(0.5992949, True)
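
For reference, here is a minimal sketch of the two variants; enc and db_enc are illustrative stand-ins for the image encoding and the stored database encoding, not the assignment’s actual variable names:

import numpy as np
import tensorflow as tf

# Illustrative stand-ins for the output of img_to_encoding(...) and
# the stored database encoding.
enc = np.random.rand(128).astype(np.float32)
db_enc = np.random.rand(128).astype(np.float32)

dist_np = np.linalg.norm(enc - db_enc)       # plain numpy L2 norm
dist_tf = tf.norm(tf.subtract(enc, db_enc))  # TF equivalent

print(dist_np, float(dist_tf))  # the two agree to float32 precision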

So I think the “expected values” shown in the notebook are actually wrong. Try submitting to the grader and I’ll bet you pass.

Hi Paul,

Thanks, you were absolutely right.
That’s a big relief: there isn’t very much there to change.

Hi Balaji,

The autograder passed me.

Exactly: there really aren’t that many moving parts there. I was wondering whether it could be a precision problem, i.e. whether you had done something that limited the computation to 32 bits, but just doing the natural thing in either TF or numpy gives you 64-bit resolution.

Not sure why the numbers they show would be different, but maybe they changed the model somehow and forgot to update the notebook. I’ll file a bug on this …
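
If you want to confirm what precision you are actually getting, a quick check along these lines works (random stand-ins for the encodings, not the model’s real output):

import numpy as np
import tensorflow as tf

# numpy creates float64 arrays by default.
enc_a = np.random.rand(128)
enc_b = np.random.rand(128)

print(np.linalg.norm(enc_a - enc_b).dtype)  # float64
print(tf.norm(enc_a - enc_b).dtype)         # <dtype: 'float64'> -- TF keeps the input dtype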

Here are the runs on my machine:

# With `tf.norm(..., ord=2)`
(<tf.Tensor: shape=(), dtype=float32, numpy=0.5992946>, True)

# With `np.linalg.norm(..., ord=2)`
(0.5992946, True)

# With `tf.linalg.norm(..., ord=2)`
(<tf.Tensor: shape=(), dtype=float32, numpy=0.5992946>, True)

I think you are passing the tests because of this check:

>>> np.isclose(0.599294900894165, 0.5992946)
True
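
To spell out why that passes, using np.isclose’s documented defaults (rtol=1e-05, atol=1e-08):

import numpy as np

a, b = 0.599294900894165, 0.5992946

# np.isclose(a, b) tests |a - b| <= atol + rtol * |b|.
print(abs(a - b))              # ~3.0e-07: the discrepancy
print(1e-08 + 1e-05 * abs(b))  # ~6.0e-06: the allowed tolerance
print(np.isclose(a, b))        # True -- the discrepancy is well inside tolerance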

Okay, so is this a ‘float32 vs. float64’ issue?

Apparently. Balaji, did you do anything in your code to restrict the computation to float32? I added the ord=2 argument to np.linalg.norm (which is the default, right?), and I still get the same answer Peter and I showed above.

So what is going on?

Didn’t restrict the computation to float32.

The default norm for numpy.linalg.norm is L2 for vectors.
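
A quick sanity check of that default:

import numpy as np

v = np.array([3.0, 4.0])
print(np.linalg.norm(v))         # 5.0 -- the default for 1-D arrays is the L2 norm
print(np.linalg.norm(v, ord=2))  # 5.0 -- explicit ord=2 gives the same result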

Is either of you running on a GPU?

I am not running on a GPU, Balaji.

I added .astype(np.float32) to the input of np.linalg.norm, and it makes no difference. I also tried an explicit tf.cast of the encoding to tf.float32; nothing I’ve tried gets me the answer that Balaji shows.
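
Roughly what I tried, with diff as an illustrative stand-in for the difference of the two encodings:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
diff = rng.standard_normal(128)  # stand-in for (encoding - database_encoding)

print(np.linalg.norm(diff))                       # float64 path
print(np.linalg.norm(diff.astype(np.float32)))    # with .astype(np.float32)
print(float(tf.norm(tf.cast(diff, tf.float32))))  # with explicit tf.cast
# All three agree to about 7 significant digits here.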

Balaji, did you perhaps modify the img_to_encoding function in some way?

I’m running on the course website, so I’m guessing that does not use a GPU.

If you are running this locally with a different version of TF or on Colab or the like, then that would explain a lot.

@paulinpaloalto I didn’t modify the img_to_encoding function. I’m running locally.
I guess that would explain the difference in numbers.

Ah, ok, yes, there is a lot of “versionitis” in TF and in the Python world in general. That would explain it. It probably means that when they published the notebook, they were using a different version of TF than the course currently uses.

Sigh.