Assertion Error in Linear Regression Practice Lab

Greetings.
I’m seeking help with a problem I’m having on the linear regression practice lab. Although my code runs, produces the expected answer, and seems to be indented correctly, I continue to receive the following error:

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
      9 # Public tests
     10 from public_tests import *
---> 11 compute_cost_test(compute_cost)

~/work/public_tests.py in compute_cost_test(target)
      9     initial_b = 3.0
     10     cost = target(x, y, initial_w, initial_b)
---> 11     assert cost == 0, f"Case 1: Cost must be 0 for a perfect prediction but got {cost}"
     12
     13     # Case 2

AssertionError: Case 1: Cost must be 0 for a perfect prediction but got 100.16431487582679

Any thoughts would be highly appreciated.


Which course is this? I am guessing you are not implementing the cost function properly, or maybe one of the helper functions above it, if any!


Sorry - I should have mentioned a couple of things:
a) This is Andrew Ng’s course on machine learning, week II; and
b) I have virtually no experience with Python
Thanks,
Kevin Lindsey


But the cost function works as expected (i.e., the answer it spits out is correct) with the given test data.


There is more than one test of your compute_cost() code. It is failing one of them.

I recommend you take a short Python course. The notebooks assume you already have a working understanding of Python syntax and coding.


Let me flog this horse one last time before I sign up for a Python course.
The algorithm couldn’t be any easier. It simply requires that I:

a) compute f_wb for each of the 97 examples in x_train;
b) subtract the corresponding y from each f_wb, then square the result;
c) add together all of the results, then divide the final sum by 2m (sketched in code right after this list).
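
In Python, those three steps amount to something like this (a minimal sketch with generic parameter names x, y, w, b and NumPy arrays, not the exact starter code):

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Cost J(w, b) = (1 / (2 * m)) * sum_i (w * x[i] + b - y[i]) ** 2."""
    m = x.shape[0]
    total = 0.0
    for i in range(m):
        f_wb = w * x[i] + b            # a) prediction for example i
        total += (f_wb - y[i]) ** 2    # b) squared error against the target y[i]
    return total / (2 * m)             # c) sum of squared errors divided by 2m

# A perfect fit (y generated exactly as w * x + b) must give a cost of 0,
# which is precisely what the failing "Case 1" test checks.
x_demo = np.array([1.0, 2.0, 3.0])
y_demo = 2.0 * x_demo + 3.0
assert compute_cost(x_demo, y_demo, 2.0, 3.0) == 0.0
```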

I don’t understand how there could be any additional tests that this code would fail. Are they looking for data with specific names, such that I’m mis-naming the results and confusing the tests? I’m genuinely curious as to why this simple code - which works fine on the given test data - isn’t up to snuff on whatever other tests are being applied.
Thanks,
Kevin


The grader only looks at the values your function returns; it never inspects your code. It also uses many different tests, not just the ones you can see in the notebook.
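
To make that concrete, here is a hypothetical reconstruction of what the failing "Case 1" might look like (inferred from the traceback above, not copied from the actual public_tests.py). The test builds its own x and y and only checks the returned value:

```python
import numpy as np

def compute_cost_test(target):
    # Case 1: y lies exactly on the line w * x + b, so a correct
    # implementation must return a cost of exactly 0.
    x = np.array([2.0, 4.0, 6.0, 8.0])
    initial_w = 2.0
    initial_b = 3.0
    y = initial_w * x + initial_b
    cost = target(x, y, initial_w, initial_b)
    assert cost == 0, f"Case 1: Cost must be 0 for a perfect prediction but got {cost}"
```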

The most common mistakes by those with minimal Python experience are:

  • incorrect indentation;
  • using global variables inside a function instead of the function’s parameters (see the sketch after this list);
  • using fixed constant values instead of variables.
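
For instance, the second mistake typically looks like this (a hypothetical sketch using the x_train / y_train names from the notebook):

```python
import numpy as np

# Globals that the notebook defines before the graded cell.
x_train = np.array([1.0, 2.0])
y_train = np.array([5.0, 7.0])

# Buggy: ignores the x and y parameters and reads the notebook's globals
# instead, so it only passes tests that happen to use that one dataset.
def compute_cost(x, y, w, b):
    m = x_train.shape[0]
    cost = 0.0
    for i in range(m):
        cost += (w * x_train[i] + b - y_train[i]) ** 2
    return cost / (2 * m)

# Fixed: uses only the function's parameters, so it returns the right
# value for any dataset a hidden test passes in.
def compute_cost(x, y, w, b):
    m = x.shape[0]
    cost = 0.0
    for i in range(m):
        cost += (w * x[i] + b - y[i]) ** 2
    return cost / (2 * m)
```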

I fixed the problem. Seems I was using the wrong names for x and y (I called them “x_train” and “y_train” instead of simply “x” and “y”, which apparently is what screwed up the subsequent tests).
Can I assume that we’re allowed to use an AI chatbot to help fix our code for this course?
Thanks,
Kevin


Good catch.

You can use whatever tools you wish.

You will learn more if you don’t lean on AI for the fundamentals.


Completely agree. However, AI can be useful for those times when you know your code isn’t running because you’ve overlooked something dumb and pretty obvious.

You won’t learn anything about debugging and code execution by turning to AI to give you the answer as to what you’ve overlooked.