Week 4, assignment 2, train_step, error in function compute_style_cost

Hi,

To resolve the error “cannot compute Mul as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:Mul]” in compute_layer_style_cost, I added numpy() to the reduce_sum result, which made that error go away. Alternatively, casting the reduce_sum output to float64 also resolved it and made the compute_layer_style_cost test pass. But then in train_step, the call to compute_layer_style_cost throws “‘Tensor’ object has no attribute ‘numpy’”. If I use the type-cast version in compute_layer_style_cost instead, the train_step error becomes “Unexpected cost for epoch 0: -318755264.80295044 != 25629.055”. I have gone through all the discussion topics but unfortunately could not find a proper solution. Please help.


It is not a good idea to use numpy functions when you’re working in TF. The reason is that TF computes the gradients automatically: you don’t have to write the back propagation code yourself. But if you include numpy functions in the compute graph, the gradient computations will fail, because only TF functions have the “autograd” logic.
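Here is a toy illustration of what I mean (not the assignment code): inside a function wrapped with @tf.function, which is how train_step is set up, the tensors are symbolic, so calling .numpy() on them is exactly what produces the “‘Tensor’ object has no attribute ‘numpy’” error, while staying in pure TF ops lets the tape compute the gradient.

```python
import tensorflow as tf

x = tf.Variable(2.0)

@tf.function   # train_step in the notebook is wrapped the same way
def bad_step():
    with tf.GradientTape() as tape:
        # In graph mode the tensors are symbolic, so .numpy() does not
        # exist here -> 'Tensor' object has no attribute 'numpy'
        loss = tf.square(x).numpy()
    return tape.gradient(loss, x)

@tf.function
def good_step():
    with tf.GradientTape() as tape:
        loss = tf.square(x)          # stay in TF ops end to end
    return tape.gradient(loss, x)    # d(x**2)/dx at x = 2 is 4.0

print(good_step())   # tf.Tensor(4.0, shape=(), dtype=float32)
# bad_step()         # raises the AttributeError described above
```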

You can use numpy in places that are not involved in the gradients, though, e.g. computing the constant values involving the fractions and the dimensions. But there you need to be careful to do everything in floating point: use 1. instead of just 1, or 4. instead of 4 … I had to be careful with the punctuation there not to confuse the issue further! :laughing:
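Here is a made-up illustration of why that matters (hypothetical dimensions, not the assignment’s): squaring the denominator as an int32 tensor silently overflows, which is where negative style costs usually come from, whereas plain Python float arithmetic is perfectly safe.

```python
import tensorflow as tf

n_H, n_W, n_C = 50, 50, 512     # hypothetical layer dimensions

# Squaring the dimension product as an int32 tensor overflows silently
# and comes out negative -- a classic source of negative style costs.
denom_int = tf.square(tf.constant(2 * n_H * n_W * n_C, dtype=tf.int32))
print(denom_int)                # overflowed int32, negative value

# The same arithmetic with plain Python floats stays positive and adds
# no extra tensor dtypes to the compute graph.
denom_float = 4. * (n_C ** 2) * ((n_H * n_W) ** 2)
print(1. / denom_float)         # tiny positive float, as expected
```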


There should be no need to coerce types anywhere in your code.

On the point about the floating point constants, did you find this thread? Anytime someone ends up with a negative cost value, this is the first thing to check. There’s no way costs should ever be negative, right?


Indeed I had tried the solution in that thread, but only partially! I used tf.math.divide instead of “/”, but didn’t change tf.square into **2. Now that I have applied the full fix, it works like a charm :). Thank you very much, Paul.
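For anyone who finds this later, the pattern that ended up working looks roughly like this (the diff tensor and the dimensions are stand-ins, not the assignment’s values): the tensor math stays in TF ops, and the normalization constant is ordinary Python float arithmetic with ** 2, applied with tf.math.divide.

```python
import tensorflow as tf

diff = tf.random.normal((128, 128))   # stand-in for GS - GG
n_H, n_W, n_C = 30, 40, 128           # stand-in layer dimensions

# TF ops for the tensor part; plain Python ** 2 and float literals for
# the constant; tf.math.divide to put the two together.
J_style_layer = tf.math.divide(
    tf.reduce_sum(tf.square(diff)),
    4. * (n_C ** 2) * ((n_H * n_W) ** 2),
)
print(J_style_layer)   # a small positive float32 scalar
```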