I have not taken any of the TF specializations, but if you are building a custom loss function, the whole point is that it will be used to drive the gradients for back propagation. In TF, all of that is handled for you automatically using “autodiff”, and only TF functions support it. The complete compute graph from the parameters to the cost needs to be composed entirely of TF functions for the automatic generation of gradients to work, because numpy functions don’t carry the “autodiff” logic. Here’s an article on the TF website about how this works.
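Just to make that concrete, here’s a minimal sketch (the names `custom_mse` and `broken_mse` are my own, not from the course) contrasting a loss built from TF ops, which the GradientTape can differentiate, with a numpy version of the same math, which breaks the chain:

```python
import tensorflow as tf
import numpy as np

# Built entirely from TF ops: autodiff can trace every step,
# so gradients flow from the loss back to the parameters.
def custom_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# The "same" loss via numpy: np.square and np.mean have no
# autodiff logic, so the compute graph is broken here.
def broken_mse(y_true, y_pred):
    return np.mean(np.square(y_true - y_pred))

w = tf.Variable(2.0)
x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([3.0, 5.0, 7.0])

with tf.GradientTape() as tape:
    loss = custom_mse(y, w * x)
print(tape.gradient(loss, w))  # a real gradient tensor

with tf.GradientTape() as tape:
    loss = broken_mse(y, w * x)
print(tape.gradient(loss, w))  # None -- the tape lost track at the numpy call
```

In eager mode the numpy version typically just hands back `None` for the gradient as above; inside a compiled `@tf.function` graph the numpy call usually fails outright, since a symbolic tensor can’t be converted to a numpy array.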
Here’s a recent thread from DLS about this point that shows the type of error message you get when your gradients can’t be computed in this way. Look at the first post on the thread to see the error message and then read on to see Raymond’s excellent explanation.