Greetings!!
I am unable to understand what is wrong with my implementation of the predict_and_sample function. All functions up to that one pass their tests.
However, when I execute the test code given in the notebook, I get results that are different from the expected ones.
Oh, sorry, there must be something wrong, because results and indices are just two different ways of presenting the same output, so they should agree: if indices[12] is array([79]), then argmax(results[12]) should be 79, not 0.
Are you sure you followed the instructions for how to convert the index values to the results values?
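Here is a quick sanity check you could drop into the test cell, assuming results and indices are the two values returned by your predict_and_sample (position 12 is just the spot-check from my example above):

```python
import numpy as np

# Spot-check that the one-hot results agree with indices at one position.
# `results` and `indices` are assumed to be the two return values of
# predict_and_sample; position 12 is only the example discussed above.
i = 12
print("indices[12]         =", indices[i])
print("argmax(results[12]) =", np.argmax(results[i]))
assert np.argmax(results[i]) == int(np.squeeze(indices[i]))
```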
My results are still incorrect. I think I am not correctly converting pred, which is a list, to a NumPy array. Do I need to convert pred to a NumPy array before passing it to np.argmax?
Yes, you do, but your indices values are already correct, and that is the step where the conversion would matter, so you must already be handling it. Or did you use pred instead of indices as the input to to_categorical? FWIW, here is my output from that test cell with a few added prints to show the shapes and types.
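In case it helps, here is a rough sketch of the conversion I am describing. It is not the official solution, and names like inference_model, x_initializer, a_initializer, c_initializer, and n_values are assumptions about how your notebook sets things up:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

def predict_and_sample_sketch(inference_model, x_initializer, a_initializer,
                              c_initializer, n_values):
    # model.predict returns a Python list of arrays, one per output step
    pred = inference_model.predict([x_initializer, a_initializer, c_initializer])
    # Convert the list to a single NumPy array before taking the argmax,
    # then take the argmax over the last (one-hot) axis.
    indices = np.argmax(np.array(pred), axis=-1)
    # One-hot encode the *indices* (not pred) to get results.
    results = to_categorical(indices, num_classes=n_values)
    return results, indices
```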
That all sounds correct: you don’t need any preprocessing on the initializer values and now your results and indices match. So I think everything is ok and it’s just the point that I made in my first reply: the results here are not reproducible. Try submitting to the grader and I’ll bet you get full marks.
That’s good news, so your code must be correct. So I guess we’re back to the statement they made about the results not always being the same.

I notice that they did not set the random seeds anywhere in this assignment, which is something they normally do in order to get reproducible results for ease of testing and grading. I tried a few experiments: if I just rerun only the test cell for predict_and_sample multiple times, I get the same answer every time, but if I do “Kernel → Restart and Clear Output” and then run everything again, I get different answers. So it must be that the training of the model starts from random initializations and we don’t end up with exactly the same weights every time.

You could try an experiment: add code to set the random seeds before the training and see whether that makes the output reproducible.
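If you want to try that experiment, a minimal sketch is below; it just fixes the Python, NumPy, and TensorFlow seeds before the model is built and trained (the exact call may differ depending on which TensorFlow version the notebook uses):

```python
import random
import numpy as np
import tensorflow as tf

# Fix the random seeds before building and training the model, then do
# "Kernel -> Restart and Clear Output" and rerun everything to see whether
# the predict_and_sample output is now the same on every run.
SEED = 0
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)  # on TF 1.x this would be tf.set_random_seed(SEED)
```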