Help: W2 E8, cannot figure out why the w/Y prediction is wrong

Dear all,

Sorry for the mess - I spent a very long time troubleshooting but still do not understand why. I ran all the previous tests (and checked public_tests.py manually) to make sure all the earlier functions pass. However, my w and Y values are not right. The weird thing is that I was able to get the right results in the actual training below:

train accuracy: 98.08612440191388 test accuracy: 70.0

So it seems like the code is working but can't pass the test. I'm sure I did something wrong somewhere - could you please help? My final model code looks like this:

{moderator edit - solution code removed}

Any pointers would be appreciated!

Best,
Shawn

Hi @yw5aj ,

If you have hard-coded values in any of the functions being called, then when a different set of values is used for testing, the results will not be as expected. I suggest you trace back and check all the functions to see if there are any hard-coded values.

Hi @Kic ,

Thank you for your reply! I don't think I hard-coded any values. I did trace back and check; on the other hand, if I had hard-coded any values, then my real training results wouldn't be 98.08% and 70.0%, right?

Hi @yw5aj ,

If all the helper functions have passed the unit tests and there are no hard-coded values in any of those helper functions, then the only other possibility is that the execution environment picked up some value somewhere. Your code for the model() function looks fine, so I would suggest refreshing the kernel and rerunning all the code from the start.

Hi Kic,

I also tried that and it didn't work. Would you mind if I shared my lab ID with you so you could take a look? It's: ladefworjpzf

Shawn

Hi @yw5aj

I cannot access your notebook. If the error you are referring to has to do with testing your own image, then please be aware that this notebook assignment is meant to give you some idea of what a simple logistic regression model involves and the steps it takes to produce a model. The model is trained on a small dataset, and those images have all been pre-processed specifically for this assignment. So it has its limitations in terms of accuracy when predicting an image that is different from the images it was trained on. That is because it has not learned those differences.

So if you have done all the graded exercises and passed, then don't concern yourself too much with why the model's prediction in the optional exercise doesn't turn out right.

As you progress further into the specialisation, in Course 2 you will be introduced to techniques for improving the accuracy of a model so that it generalises well to unseen data. So stay tuned.

Hi Kic,

No, the error I was referring to is in the code blocks a few cells below the model function, where we actually train the model. The markdown notes say I should get an accuracy close to 100% on the train set and 70% on the dev set. If my model() implementation were problematic, I don't think these numbers would match.

I'm not worried about the accuracy - this model has high variance, and we can regularize or find other ways to resolve that. Yes, I agree with you.

That said, I'd really love to get my score to 100% for this course if possible :slight_smile:

Shawn

Hi @yw5aj ,

The train accuracy you got is close to, but not the same as, the expected value. What are your cost outputs? Checking the cost after every 100 iterations could throw some light on what is happening.

Just to close the loop on the public thread: I DM'ed Shawn and was able to see the code. It turns out there was a problem in the predict function, which the unit test for predict didn't catch, although model_test did catch it. The learning doesn't actually depend on predict, which is why the model code still worked and produced good answers in the actual training.
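To make that point concrete, here is a minimal, self-contained NumPy sketch (not the assignment's solution code; the optimize/predict names just mirror the notebook's structure, and the buggy predict shown, which forgets the sigmoid before thresholding, is only one hypothetical example of such a bug). The gradient-descent loop never calls predict, so w and b come out the same either way; a predict bug only surfaces when predictions and accuracy are computed.

```python
# Illustrative sketch only: training uses just the cost/gradients, never predict().
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def optimize(w, b, X, Y, num_iterations=2000, learning_rate=0.1):
    """Plain gradient descent on the logistic-regression cost. Note: predict() is never called here."""
    m = X.shape[1]
    for _ in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)      # forward pass
        dw = np.dot(X, (A - Y).T) / m        # backward pass (gradients)
        db = np.sum(A - Y) / m
        w = w - learning_rate * dw           # parameter updates
        b = b - learning_rate * db
    return w, b

def predict(w, b, X):
    """Correct version: threshold the sigmoid activations at 0.5."""
    A = sigmoid(np.dot(w.T, X) + b)
    return (A > 0.5).astype(float)

def predict_buggy(w, b, X):
    """Hypothetical bug: forgets the sigmoid, so the 0.5 threshold is applied to raw logits."""
    A = np.dot(w.T, X) + b
    return (A > 0.5).astype(float)

# Tiny synthetic dataset: 2 features, 20 examples, noisy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 20))
Y = (X[0:1, :] + X[1:2, :] + 0.5 * rng.normal(size=(1, 20)) > 0).astype(float)

# Training produces the same w and b regardless of which predict is used afterwards.
w, b = optimize(np.zeros((2, 1)), 0.0, X, Y)

print("train accuracy, correct predict:", 100 * np.mean(predict(w, b, X) == Y))
print("train accuracy, buggy predict:  ", 100 * np.mean(predict_buggy(w, b, X) == Y))
```

The learned parameters are identical in both cases; only the reported predictions (and hence the accuracy and the test on predict's output) differ.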

In terms of diagnosing this without looking at the code, I think that if we had seen the failing test output from model_test, it would have been enough to point to predict as the source of the problem.