The simple model does a bit better than the regularized model on the training set, I see: 0.070 < 0.077.
But why is the regularized model also worse on the cross-validation set? Regularized (0.81) > Simple (0.078)
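Regularization usually trades a little training error for better cross-validation error, though with a weak penalty or a lucky split the ordering can flip, as in the numbers above. A quick way to see both errors side by side, sketched here with scikit-learn's Ridge on made-up data (not the assignment's models or dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical noisy data, NOT the assignment's dataset
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(120, 1))
y = x[:, 0] ** 3 + 0.3 * rng.normal(size=120)
X = np.hstack([x ** d for d in range(1, 11)])  # degree-10 features invite overfitting

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.4, random_state=1)

simple = LinearRegression().fit(X_train, y_train)
regularized = Ridge(alpha=1.0).fit(X_train, y_train)

for name, model in [("simple", simple), ("regularized", regularized)]:
    err_train = mean_squared_error(y_train, model.predict(X_train))
    err_cv = mean_squared_error(y_cv, model.predict(X_cv))
    print(f"{name:11s} train MSE = {err_train:.3f}, cv MSE = {err_cv:.3f}")
```

One thing that always holds: the unregularized model's training error can never be above the regularized one's on the same features, because ordinary least squares minimizes training MSE by definition. The cross-validation ordering, however, depends on the data and the penalty strength.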
Hi @Ali_Zeynalli, I got a different set of error values from yours for all models and data sets.
Hi @Ali_Zeynalli,
Thank you for sharing your notebook with me. I ran your notebook from the start up to Exercise 5, and it shows the same results as mine; in other words, your notebook is producing numbers that are not the same as what your screenshot shows.
Please try the following: click "Kernel" > "Restart". This restarts your Python kernel, so variables in memory are reset but your code is not. After restarting the kernel, please run your notebook from the beginning to Exercise 5 and see whether you get different error results.
Raymond
I restarted the kernel, but it didn't help. I am getting the same results as in my screenshot.
Also, I have tried running my notebook in the browser, and after running the cell with:
model_r.summary()
model_r_test(model_r, classes, X_train.shape[1])
I get "AttributeError: Layer ComplexRegularized is not connected, no input to return." It is not a fatal error; if I ignore it, I am still able to use my model and predict. Maybe that is the reason?
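For reference, "not connected" style errors typically come up when a layer's input/output is queried before the model has been built; a minimal sketch with a generic Sequential model (not the lab's actual model_r architecture), showing that building the model first makes summary() work:

```python
import tensorflow as tf

# A Sequential model with no Input layer is not "connected" until it is
# built, either by calling it on data or via model.build().
# This is a generic stand-in, NOT the assignment's actual architecture.
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ],
    name="ComplexRegularized",
)

# Before building, inspecting layer inputs or calling summary() can fail.
model.build(input_shape=(None, 4))  # connect the graph for a 4-feature input
model.summary()                     # now works
```

Whether this particular error appears can differ between TensorFlow versions, which is consistent with the environment difference discussed below.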
Hi @Ali_Zeynalli,
I don't see this error either. Can you help me do 2 things?
#1. At the bottom of your assignment notebook, after the congratulations message, there is an empty code cell. Please copy the following code lines into that cell and run it:
import tensorflow as tf
print('tf', tf.__version__)
import numpy as np
print('np', np.__version__)
import sklearn
print('sklearn', sklearn.__version__)
Please share the printed messages.
#2. Click "File" > "Open". Click the check box next to the file "assigment_utils.py", and click "Download". Next, do the same for the file "public_tests_a1.py". Then please send me the 2 files in a direct message. I would like to take a look at those too.
Check the file:
A "Download" button will show up after you check one, and only one, file.
Thank you.
Raymond
Hello @Ali_Zeynalli, thank you very much for providing everything I needed. Here is what I found:
- Your two .py files are the same as mine
- Your notebook runs fine in my Coursera environment, without any error
- Based on the versions of your tensorflow, sklearn, and numpy, your environment seems to be different. Here are my versions on Coursera:
tf 2.8.0
np 1.21.6
sklearn 0.22.2.post1
- The lab was designed and tested to run properly in the Coursera environment; outside of it, the behavior may be unexpected. We can't support all the different environments, so we suggest working on the labs on Coursera.
- I did a quick check on your versions of numpy and sklearn, and I don't think they are the cause of the problem you saw. However, I can't check your version of tensorflow because it's not compatible with my own Python environment, so I can't rule it out as the cause.
- IF your version of tensorflow caused the neural network's weights to be initialized to a different set of values, that could explain the difference in the errors, but again I can't verify that.
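One way to test whether initialization is the culprit is to fix the random seeds so that runs become reproducible; in Keras the relevant calls are tf.keras.utils.set_random_seed or tf.random.set_seed. The idea can be sketched with plain NumPy (hypothetical shapes, not the lab's weights):

```python
import numpy as np

# Two runs with the same seed produce identical "initial weights";
# a different seed (or a different default RNG across library versions)
# produces different ones, which can change the final training error.
def init_weights(seed, shape=(3, 2)):
    rng = np.random.default_rng(seed)
    return rng.normal(size=shape)

w_a = init_weights(seed=42)
w_b = init_weights(seed=42)
w_c = init_weights(seed=7)

print(np.allclose(w_a, w_b))  # True: same seed, same weights
print(np.allclose(w_a, w_c))  # False: different seed, different weights
```

If seeding the notebook makes the error values match across machines, initialization was the source of the discrepancy; if not, something else in the environment differs.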
Lastly, everyone's Coursera environment should be the same, but yours appears to differ from mine. If you did the version check and ran the notebook on another machine, that would explain the difference in versions; in that case I would simply suggest running it on Coursera, and the problems in your post should go away.
However, if you did the version check and ran the notebook both on Coursera, and still got the versions you sent me and the different error values you posted here, then please let me know. I will then inform the course team. I would be pretty surprised if your Coursera environment had different versions of tensorflow, numpy, and sklearn; if that is true, it is not good and we should report it. Thank you.
Yes, you are right. In the Coursera environment, everything works as it should. Thank you for your help.
You are welcome @Ali_Zeynalli!