C3W1 Assignment, Exercise 4 (Training the model): I get this error


AttributeError                            Traceback (most recent call last)
Cell In[144], line 2
      1 ## UNIT TEST
----> 2 w1_unittest.test_compile_model(compile_model)

File /tf/w1_unittest.py:164, in test_compile_model(target)
    162 optimizer = model.optimizer
    163 assert type(optimizer) == tf.keras.optimizers.Adam, "Wrong optimizer"
--> 164 assert np.isclose(optimizer.learning_rate.numpy(), 0.00125), f"Wrong learning_rate. Expected {0.00125} but got {optimizer.learning_rate.numpy()}."
    166 print("\n\033[92mAll test passed!")

AttributeError: 'float' object has no attribute 'numpy'

By any chance, did you edit a part of the code you weren't supposed to?

Any code or instructions given outside the ### START CODE HERE ### and ### END CODE HERE ### markers isn't supposed to be edited, deleted, or added to.

Please confirm this, as your error message says "Wrong optimizer", suggesting Adam is the wrong optimizer of choice here.

I am supposed to use the Adam optimizer in this assignment.

You have not added learning_rate=0.00125 to your Adam optimizer, which is what is throwing this error, as per the instructions given in the assignment.
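For reference, here is a minimal sketch of what a compile_model function along these lines could look like (the exact signature, loss, and metrics are assumptions based on this thread, not the assignment's actual code):

```python
import tensorflow as tf

def compile_model(model):
    """Compile the model with Adam at the learning rate the unit test checks for."""
    # learning_rate must be passed explicitly; the test compares
    # optimizer.learning_rate.numpy() against 0.00125
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.00125)
    # from_logits=True because the model's final layer emits log-probabilities
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(optimizer=optimizer, loss=loss)
    return model
```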

Why are you using name = log_softmax?

The instructions only explain why SparseCategoricalCrossentropy was chosen as the loss, since we use log_softmax in the GRULM model; you need to add that to the loss.
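To illustrate why that loss choice works with a log_softmax output layer: softmax(log_softmax(x)) equals softmax(x), so SparseCategoricalCrossentropy with from_logits=True computes the same cross-entropy whether it receives raw logits or log-probabilities. A small check (toy values, not the assignment's data):

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
log_probs = tf.nn.log_softmax(logits)

# Applying softmax to log-probabilities recovers the original probabilities,
# so from_logits=True yields the same loss for both inputs
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
y_true = tf.constant([0])
loss_from_logits = loss_fn(y_true, logits)
loss_from_log_probs = loss_fn(y_true, log_probs)
```

Both loss values come out (numerically) identical.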

I removed that, and yet I get the same error.

Then the code in your other graded cells needs to be looked at.

Not sure; the other cells passed all their tests.

Passing all the tests doesn't necessarily mean your code is correct.

Basically, I suspect that in one of the cells a NumPy function was required but you haven't used it somewhere.

So a thorough code review will be required.

The grader uses different tests.

Actually, after a long review of the code, I figured out where the problem was. I set activation=log_softmax instead of activation=tf.nn.log_softmax in the GRULM model. Thanks though.
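For anyone hitting the same issue, the fix looks roughly like this: the final Dense layer must receive the actual tf.nn.log_softmax function as its activation. The layer sizes below are placeholders, not the assignment's actual values:

```python
import tensorflow as tf

vocab_size, embedding_dim, rnn_units = 256, 64, 128  # placeholder sizes

# A GRULM-style stack: pass tf.nn.log_softmax itself as the activation,
# not an undefined bare name like log_softmax
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.GRU(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size, activation=tf.nn.log_softmax),
])
```

With this activation, each output position holds log-probabilities, which exponentiate to a distribution summing to 1 over the vocabulary.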