Week 4 assignment fails despite showing 99.9% in the lab

Hi folks,

I don’t know how to get around this, short of cribbing the architecture.

I see 99.9% in the Colab notebook after 10 epochs.

However, the grader says it’s 93.4%.

How on earth do I correct this, given the Colab notebook isn’t showing me such a result? In other words, I can’t course-correct because the notebook isn’t giving me the same results as the grader?!

I’d be grateful for any advice.

Thank you


We are not using Colab for DLS courses but Jupyter Notebook. And if it shows 99.9% after 10 epochs, that would be your model accuracy, not your assignment score. If the grader says 93.4%, that’s your assignment score. It also gives you feedback. Check it out.

Best,
Saif.

The grade score was 67%.
The accuracy was reported as 93%.

The Jupyter notebook is opened in Google Colab. I’m doing this course via Coursera, so there may be a difference if there are other ways to take this course?

I’ll send a screenshot perhaps when I try again later.

Anyway, my point is that the model accuracy is reported differently in the notebook for some reason, so I’ve no baseline to check whether I’m doing better or worse if I can’t tie the model accuracy back to whatever the grading process is doing. I’ve passed the other three assignments in this first module, so it was fine previously.

Please move your topic to the correct subcategory i.e. Tensorflow Developer Professional Course 1 Week 4.
Here’s the community user guide to get started.

Apologies, I checked the guide, but it didn’t mention the subcategories. Is there a simple way (I’m on mobile just now) to shift the topic, please?

I wasn’t aware of further subcategorisation! 🙂

Thank you for your time.

I’ve changed it for you this time. Here’s the link within the user guide.


Thank you, I appreciate your help!

Your model looks fine. Although the callback is getting triggered, model training doesn’t stop once the desired performance is achieved. So, training continues past the epoch where the target is reached.

See this for more hints on how to stop training.

Also, it’s important to account for randomness and ensure that the model comfortably exceeds the cutoff within the epoch limit.
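In case a concrete example helps: the usual Keras pattern for halting training at a target metric is a custom callback that sets `stop_training`. This is a minimal sketch, assuming the model is compiled with `metrics=['accuracy']` and the 99.9% target discussed in this thread; check your notebook’s instructions for the exact metric name and threshold.

```python
import tensorflow as tf

class StopAtTargetAccuracy(tf.keras.callbacks.Callback):
    """Halts training once the monitored accuracy crosses the threshold."""
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # 'accuracy' must match the metric name the model was compiled with
        if logs.get('accuracy', 0.0) >= 0.999:
            print(f"\nReached 99.9% accuracy at epoch {epoch + 1}; stopping.")
            # This flag is what actually makes fit() stop at the end of the epoch
            self.model.stop_training = True

# Hypothetical usage:
# model.fit(x_train, y_train, epochs=15, callbacks=[StopAtTargetAccuracy()])
```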

Thank you, Balaji. What puzzles me is seeing 99.9% in training after 10 epochs here, yet 93.45% in the grading.

I’m not able to see what I can tweak, as I don’t get the same feedback in the Colab notebook itself that the grading is yielding. In other words, to find out if I got closer, I’d have to risk more failing submissions? Am I missing a trick?

You’re welcome. As mentioned in my previous reply, the model doesn’t stop training once the cutoff is reached.

Please look at this reply, where the image you’ve shared shows that the cutoff was reached in the 10th epoch but the model seems to continue training.

The grader feedback reiterates my point: the model trained for 15 epochs or more before stopping.

If you believe there’s nothing wrong with your work, please click my name and message your notebook as an attachment.

Thank you. I have started again, as I believe my first result must have been a fluke. I’ve not been able to repeat it since, which is what the grader must have found. I’m not sure how on earth it gave me 99.9% after 10 epochs, because now I’ve tried a few variations with other layers and changes in learning rates, and LOL, I’m getting nothing like that first epic result.

So, I’ll persist until I can get the same type of result I had then try it again.

I don’t know if this type of behaviour is common but it certainly threw me. Especially as my other assignments worked so easily!

Thank you for your time and I’ll come back to you (unless I succeed!)

I see what the problem is now. It’s nondeterministic: the same setup can yield subtly different outcomes. The version I’ve submitted has roughly a 50% chance of succeeding so far: 4 runs, 2 failed, 2 succeeded well within the target. I didn’t expect this, as I hadn’t done enough work with these networks before. The earlier assignments ran fairly straightforwardly, but this one is more complex, so the network bounces around a bit; I doubt I’ve found the exact right combination of layers and other numbers for the job.
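For anyone hitting the same run-to-run variance: one general TensorFlow mitigation (not something the assignment requires, just a technique worth knowing) is to seed the random number generators before building the model, so that repeated runs are comparable.

```python
import tensorflow as tf

# Seeds the Python, NumPy, and TensorFlow RNGs in one call (TF >= 2.7).
tf.keras.utils.set_random_seed(42)

# Optionally force deterministic ops as well (TF >= 2.8).
# This can slow training down but makes reruns reproducible.
tf.config.experimental.enable_op_determinism()
```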

Update: it passed 100% 🙂 Thank you!

Hey, congrats on passing the test. I have been trying to find the perfect combination of layers and parameters as well, but I get very varied accuracy, anywhere between 50–90% in 2–15 epochs. Can anyone guide me on the correct approach to get the desired result?

I can’t really say much more than the following, in no specific order:

Consider your layers: not too many, not too few. The simplest model that can do the job is not a terrible beginner rule of thumb.

Consider dropout as a possibility; it takes experimenting, but it stops the network relying too much on any one neuron. Play with this.

Play with a learning rate schedule.

Ideally trial just one of these at a time before mixing.

That’s all I did.

Took me about five attempts.
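To make those three knobs concrete, here is a rough sketch of what a small model with dropout and a learning rate schedule can look like. The layer sizes, dropout rate, schedule values, and 28×28 input shape are illustrative assumptions, not the assignment’s solution.

```python
import tensorflow as tf

# Hypothetical: a deliberately small model with one dropout layer.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # assumed input shape
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),                    # tune by experiment
    tf.keras.layers.Dense(10, activation='softmax'),
])

# A simple learning rate schedule: decay the rate as training progresses.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

Trying one change at a time, as above, makes it much easier to tell which knob actually moved the accuracy.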