In the week 2 quiz (and in the labs and videos), we use tensorflow.keras.backend instead of numpy for numpy-like functions (sqrt, sum, etc.). What’s more, when I use numpy in the quiz, my code doesn’t work.

So my question is, why doesn’t numpy work in this context of a custom loss function?

I looked up tf.keras.backend in the API docs looking for a clue; all of its functions are listed as “DEPRECATED”.

This relates to the course notebook on Coursera, in particular the third code cell.

Which question of the quiz is this? Are you trying to reproduce it in Python?

Perhaps it’s because TensorFlow’s built-in functions expect tensor types! Even though those backend functions are deprecated and the course is a bit old, they have probably been replaced with newer functions in TensorFlow.

Can you share a screenshot of the quiz question you are mentioning, @Christopher_Dennis?

I have not taken any of the TF specializations, but if you are building a custom loss function, the whole point is that it will be used to drive the gradients for backpropagation. In TF that is all handled for you automatically using “autodiff”, and only TF functions support it. The complete compute graph from the parameters to the cost needs to be composed only of TF functions in order for the automatic generation of gradients to work, because numpy functions don’t have the “autodiff” logic. Here’s an article on the TF website about how this works.
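As a minimal sketch of why this matters (my own example, not from the course): a hypothetical RMSE-style custom loss built entirely from TF ops can be traced by `tf.GradientTape`, whereas the same math done with numpy would produce a plain float that the tape cannot differentiate.

```python
import tensorflow as tf

# Hypothetical custom loss built only from TF ops, so autodiff can
# trace every step of the compute graph.
def my_rmse(y_true, y_pred):
    error = y_true - y_pred
    sqr_error = tf.square(error)            # TF op, not np.square
    mean_sqr_error = tf.reduce_mean(sqr_error)
    return tf.sqrt(mean_sqr_error)          # TF op, not np.sqrt

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.Variable([1.5, 2.0, 2.5])

# Gradients flow because every operation above is a TF op:
with tf.GradientTape() as tape:
    loss = my_rmse(y_true, y_pred)
grad = tape.gradient(loss, y_pred)
print(grad)

# If the body used np.sqrt(np.mean(np.square(...))) instead, the result
# would be an ordinary Python float outside the tape's graph, and
# tape.gradient would return None.
```

The same structure is why the labs use `tf.keras.backend` (e.g. `K.sqrt`, `K.mean`, `K.square`): those wrappers also return tensors that stay inside the autodiff graph.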

Here’s a recent thread from DLS about this point that shows the type of error message you get when your gradients can’t be computed in this way. Look at the first post on the thread to see the error message and then read on to see Raymond’s excellent explanation.
