Discrepancy in Prediction Values with Huber Loss: Wrapper vs Class-based Execution in TensorFlow

Hello!

While working in the Huber Object Loss Lab, I encountered an issue I can’t quite understand. I got two different prediction values: [[18.668455]] when using the wrapper loss function, and [[4.4036636]] with the class-based loss.

You can find more details in the notebook below.

Note: When I ran the notebook on the platform’s Jupyter environment, I got the expected values. However, the issue occurs when I execute the notebook on my local machine.

I’m using TensorFlow 2.16.2 with the M1 accelerator.

C1_W2_Lab_2_huber-object-loss.ipynb (7.1 KB)

Is this a graded assignment?

No, it’s a lab.
This one:

Can you share a screenshot of what you got when you run the notebook on your local computer, and also when you run it in the course-provided environment?

regards
DP

On my local machine:

  1. Wrapper loss function:

  2. Class-based loss:

In the Jupyter notebook on the platform:

  1. Wrapper loss function:

  2. Class-based loss:

For the class-based loss, you have a threshold of 1.2 in your local environment, while the course environment uses a threshold of 1.02.

Also, a bit of variation in the wrapper loss is expected based on your system configuration; that much variation is normal. The class-based variation, however, is because of your threshold value difference.

Regards
DP

Thank you, but adjusting the threshold to 1.2 or 1.02 in both approaches doesn’t change the outcome.

My question is why the two approaches, Wrapper-based Loss and Class-based Loss, which have almost identical code with only minor differences in implementation, don’t produce the same result when I run the notebook on my local machine. Based on the similarities, I expected them to yield the same outcome.
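For reference, the two approaches typically look like the sketch below (function and class names are my own; the formula is the standard Huber loss, which matches the lab's pattern). Mathematically they compute the same per-element loss, so with identical thresholds, data shapes, and seeds they should train identically:

```python
import tensorflow as tf

# Wrapper-style custom loss: an outer function captures the threshold
# and returns the actual loss function.
def my_huber_loss_with_threshold(threshold):
    def my_huber_loss(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) <= threshold
        small_error_loss = tf.square(error) / 2
        big_error_loss = threshold * (tf.abs(error) - 0.5 * threshold)
        return tf.where(is_small_error, small_error_loss, big_error_loss)
    return my_huber_loss

# Class-based custom loss: subclass tf.keras.losses.Loss and
# store the threshold as an attribute.
class MyHuberLoss(tf.keras.losses.Loss):
    def __init__(self, threshold=1.0):
        super().__init__()
        self.threshold = threshold

    def call(self, y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) <= self.threshold
        small_error_loss = tf.square(error) / 2
        big_error_loss = self.threshold * (tf.abs(error) - 0.5 * self.threshold)
        return tf.where(is_small_error, small_error_loss, big_error_loss)

# Sanity check: both versions give the same mean loss on the same inputs.
y_true = tf.constant([[1.0], [2.0]])
y_pred = tf.constant([[1.5], [4.0]])
wrapper_loss = float(tf.reduce_mean(my_huber_loss_with_threshold(1.0)(y_true, y_pred)))
class_loss = float(MyHuberLoss(threshold=1.0)(y_true, y_pred))
print(wrapper_loss, class_loss)
```

Running this comparison locally would confirm whether the loss computations themselves agree, which would point the finger at something outside the loss (data shape, seeding, or environment) rather than the implementations.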


It is probably a version difference causing the code to behave differently in your local environment.

Can you check the versions of TensorFlow and Keras in both environments?

Since we use Keras to create the custom loss, this could be another reason.

tf.keras.losses also needs to be looked at; chances are the Huber loss is working differently in your local environment than in the Coursera environment.

I haven’t looked at the assignment notebook yet, but the metadata also needs to be checked.

Small differences can happen due to random initialization of weights even if you retrain the same model multiple times. You could check by running the Sequential API model a few times.
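One way to rule out random initialization as the cause (this call is a suggestion, not part of the lab) is to fix all the seeds before each run; two models built under the same seed then start from identical weights:

```python
import tensorflow as tf

# Seeds the Python, NumPy, and TensorFlow RNGs in one call.
tf.keras.utils.set_random_seed(42)
layer_a = tf.keras.layers.Dense(1)
layer_a.build((None, 1))
w_a = layer_a.kernel.numpy().copy()

# Re-seeding and rebuilding reproduces the exact same initial weights.
tf.keras.utils.set_random_seed(42)
layer_b = tf.keras.layers.Dense(1)
layer_b.build((None, 1))
w_b = layer_b.kernel.numpy().copy()

print((w_a == w_b).all())
```

If the two loss versions still diverge after seeding, randomness is not the explanation.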

Chances are that TensorFlow’s backend clear_session is working differently in the two environments, where one is freeing up memory and the other is not.

To solve the problem I had to reshape xs and ys.
So I changed the line:

model.fit(xs, ys, epochs=500, verbose=0)

to:

model.fit(xs.reshape(-1, 1), ys.reshape(-1, 1), epochs=500, verbose=0)

I got the same prediction values when using either the Wrapper function Loss or the Class-based Loss.
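The reshape matters because a 1-D array is shape `(n,)`, while Keras layers expect an explicit `(samples, features)` layout. A minimal sketch (the `xs`/`ys` values are the usual `y = 2x - 1` arrays used in this lab; treat the exact numbers as an assumption):

```python
import numpy as np

# 1-D training data, as typically defined in the lab.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

print(xs.shape)                 # (6,)  -- no explicit feature dimension
print(xs.reshape(-1, 1).shape)  # (6, 1) -- explicit (samples, features)
```

With the ambiguous `(6,)` shape, different TensorFlow/Keras versions may broadcast the targets against the predictions differently inside the two loss paths, which would explain why the explicit reshape makes both approaches agree.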

I believe the labs and assignments for this specialization could benefit from some updates. As the field continues to evolve, keeping the coursework aligned with current tools, techniques, and real-world scenarios would enhance the learning experience and better prepare students for future challenges in the field.
