Course 4, week 4 Neural Style Transfer: train_step "Unexpected cost for epoch 0"

I came across this on Stack Overflow: “Jupyter uses an outdated version of a function when asked to import from a file”.

I know there are known issues with functions imported from a Python file. My theory, though, is that IPython keeps a cache of functions and variables, and in this case that cached version was not updated even though I had re-compiled (re-run) the cells.
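For what it’s worth, the caching is real at the module level: a plain `import` is a no-op if the module is already in `sys.modules`, so edits to the file on disk are invisible until you reload. A minimal sketch of the behavior and the `importlib.reload` workaround (the module name `helper_mod` is made up for illustration; it stands in for a course utility file):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a throwaway module on disk to stand in for a course utility file.
tmp_dir = tempfile.mkdtemp()
mod_path = Path(tmp_dir) / "helper_mod.py"
mod_path.write_text("VALUE = 1\n")
sys.path.insert(0, tmp_dir)
importlib.invalidate_caches()

import helper_mod
print(helper_mod.VALUE)  # 1

# Edit the file on disk, as you would in the Jupyter file editor...
mod_path.write_text("VALUE = 2  # updated\n")

# ...a second `import` is a no-op: the cached module object is reused.
import helper_mod
print(helper_mod.VALUE)  # still 1

# importlib.reload re-executes the source and updates the module object.
importlib.reload(helper_mod)
print(helper_mod.VALUE)  # 2
```

Restarting the kernel achieves the same thing more bluntly, by throwing away `sys.modules` entirely.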

Another “conspiracy theory”: I blame Python’s lazy evaluation. It seems that **2 and tf.square should, theoretically, have the same low-level representation, so Python just ignores the new version and doesn’t compile the new code. “Restart & Clear Output” then empties that cache and forces Python to start from scratch.

Oh, yes, I didn’t think about that aspect. It is definitely true that if you change one of the Python files that sits parallel to the notebook and gets imported, the change does not take effect until you restart the kernel and rerun the actual “import” command. But why would you be changing those files? That’s a pretty dangerous thing to do. The grader is a black box, and I literally have no idea whether it depends on the specific contents of, e.g., but I’ve got a bad feeling it does.

As to lazy evaluation, I don’t buy that theory. There’s no way **2 and tf.square would be seen as equivalent by the interpreter. Maybe at the very, very leafiest level (if that’s a word), but nothing in the call graph above that is the same. Note that the operands to **2 here are Python integer constants, so the interpreter will use Python integer operations to implement that, not TF operations. At least, I would bet you all the beer you can drink in one sitting that this is true. Prost! :beer: :nerd_face:
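A quick way to see that `**2` on plain Python numbers never reaches TensorFlow: the interpreter dispatches `**` through the left operand’s `__pow__` method, so an `int` stays an `int`, while an object type (like a tensor) routes through its own code. A plain-Python sketch, with no TF required (the `Traced` class is made up for illustration):

```python
calls = []

class Traced:
    """Stand-in for a tensor-like object: records when __pow__ runs."""
    def __init__(self, value):
        self.value = value

    def __pow__(self, exponent):
        calls.append("__pow__")
        return Traced(self.value ** exponent)

# For a plain int, ** is ordinary integer arithmetic -- no library code runs.
plain = (4 * 3) ** 2
print(type(plain).__name__)  # int

# For an object type, ** dispatches to that type's __pow__.
traced = Traced(12) ** 2
print(calls)         # ['__pow__']
print(traced.value)  # 144
```

So a tensor raised to `**2` does go through TF machinery (tensors define `__pow__`), but the two expressions are only equivalent at the leaf arithmetic, not anywhere above it in the call graph.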

BTW thanks for reminding me about the issue with changing python utility files. I think we need a topic on the FAQ Thread about that.

I just read over this whole thread and noticed that we didn’t really conclusively explain why it doesn’t work with tf.square but it does work with **2. There is one post on this thread that actually does describe the error here, but it’s worth also pointing to this other thread which gives a demonstration of the difference.


This one frustrated me for a week. I was pretty sure I had the maths laid out correctly, but tf.square and **2 both threw errors (and gave me the same tiny e-10-scale value for J_style_layer).
Eventually I stopped leading with the “1/4…etc” factor and instead started with the second term (the tf.reduce_sum) and divided that by the 4 *… etc.
Suddenly I had the right output.
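Algebraically the two orderings should agree: the layer style cost is the sum of squared differences of the Gram matrices, scaled by 1 / (4 · n_C² · (n_H · n_W)²), whether you multiply by the scale factor first or compute the sum first and then divide. A NumPy sketch of that sanity check (names like `gram_S` are my own, not the notebook’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n_H, n_W, n_C = 4, 4, 3

# Stand-ins for the Gram matrices of the style and generated images.
gram_S = rng.standard_normal((n_C, n_C))
gram_G = rng.standard_normal((n_C, n_C))

scale = 4 * (n_C ** 2) * ((n_H * n_W) ** 2)

# Ordering 1: multiply by the 1/(4 ...) factor up front.
J_a = (1.0 / scale) * np.sum((gram_S - gram_G) ** 2)

# Ordering 2: compute the sum first, then divide by the factor.
J_b = np.sum((gram_S - gram_G) ** 2) / scale

print(np.isclose(J_a, J_b))  # True
```

If the two orderings give different results in the notebook, the difference is almost certainly in how the scale factor was typed (e.g. integer division or a misplaced parenthesis), not in the maths itself.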

Same error for me, but as described above, I ran “Restart & Clear Output” and suddenly “UNQ_C5” passed!

I noticed before passing that each time I reran “UNQ_C5”, the output values diminished. Is this possibly due to overwriting the global variables somewhere?
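That is plausible: the kernel keeps globals alive between cell runs, so a cell that both reads and overwrites a global starts each re-run from the previous run’s result rather than the original input. A toy sketch of the effect (the names are made up, not from the notebook):

```python
# Imagine this was assigned once in an earlier cell.
image = 100.0

# And this "cell" both reads and overwrites the global -- each re-run
# starts from the previous run's result instead of the original input.
def run_cell():
    global image
    image = image * 0.5   # e.g. an update step applied in place
    return image

outputs = [run_cell() for _ in range(3)]
print(outputs)  # [50.0, 25.0, 12.5] -- the value diminishes on each re-run
```

“Restart & Clear Output” wipes those globals, which is consistent with the test suddenly passing afterwards.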

Welcome to the community.

I think this thread covers it. Please refer to it.