Issue with training loss

My training loss doesn’t decrease significantly. I have tried tuning the optimizer's hyperparameters, but the loss never goes below roughly 1.35 (as can be seen in the log below).

Start fine-tuning!
batch 0 of 100, loss=1.4242209
batch 10 of 100, loss=1.4164603
batch 20 of 100, loss=1.4089508
batch 30 of 100, loss=1.401805
batch 40 of 100, loss=1.3950164
batch 50 of 100, loss=1.3885505
batch 60 of 100, loss=1.3823652
batch 70 of 100, loss=1.3764206
batch 80 of 100, loss=1.3706813
batch 90 of 100, loss=1.365118
Done fine-tuning!

Up until exercise 10, all my outputs match the expected outputs, so I am not sure what the issue could be here.

Hi @alexander,

Please check this thread:

The problem was a misspelling in a variable inside the box_predictor_checkpoint.
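For reference, here is a minimal sketch of how that checkpoint is typically wired up in the TF Object Detection API fine-tuning flow. Names like detection_model and checkpoint_path are assumptions carried over from the earlier exercises, and the exact attribute names may differ slightly in the course notebook. The reason a misspelling hurts silently is that tf.train.Checkpoint accepts arbitrary keyword arguments, and expect_partial() suppresses the unresolved-object warnings, so a mistyped keyword simply never matches anything in the saved checkpoint: the pre-trained weights are not loaded and the loss plateaus instead of dropping.

import tensorflow as tf

# Restore only the box regression head (and, below, the feature extractor)
# from the pre-trained checkpoint; the classification head stays randomly
# initialized so it can be fine-tuned on the new classes.
# NOTE: detection_model and checkpoint_path are assumed to exist from the
# earlier exercises; a typo in any keyword below fails silently.
box_predictor_checkpoint = tf.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    _box_prediction_head=detection_model._box_predictor._box_prediction_head)

model_checkpoint = tf.train.Checkpoint(
    _feature_extractor=detection_model._feature_extractor,
    _box_predictor=box_predictor_checkpoint)

ckpt = tf.train.Checkpoint(model=model_checkpoint)
ckpt.restore(checkpoint_path).expect_partial()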

I hope it helps!
