Cell #8. Can't compile the student's code. Error: NameError("name 'value' is not defined")

Thanks Saif for the explanation. Since this is a pretty common question, I have added it to our FAQ in B8.

Cheers,
Raymond

I ran the cell before it and still get the error; the cell before that one runs fine and confirms everything matches the expected output. Here's the code throwing the error (both cells are provided and meant to be run as-is; they cannot be edited):

<details>
<summary>
    <b>Expected Output: Cost 0.30 (click to see details):</b>
</summary>

    # With the following settings
    np.random.seed(1)
    initial_w = 0.01 * (np.random.rand(2) - 0.5)
    initial_b = -8
    iterations = 10000
    alpha = 0.001
    #

Iteration 0: Cost 0.96
Iteration 1000: Cost 0.31
Iteration 2000: Cost 0.30
Iteration 3000: Cost 0.30
Iteration 4000: Cost 0.30
Iteration 5000: Cost 0.30
Iteration 6000: Cost 0.30
Iteration 7000: Cost 0.30
Iteration 8000: Cost 0.30
Iteration 9000: Cost 0.30
Iteration 9999: Cost 0.30
</details>

Cell block before that one:

```python
import math

def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters, lambda_):
    """
    Performs batch gradient descent to learn w and b. Updates them by taking
    num_iters gradient steps with learning rate alpha

    Args:
      X :    (ndarray Shape (m, n)) data, m examples by n features
      y :    (ndarray Shape (m,))  target value
      w_in : (ndarray Shape (n,))  Initial values of parameters of the model
      b_in : (scalar)              Initial value of parameter of the model
      cost_function :              function to compute cost
      gradient_function :          function to compute gradient
      alpha : (float)              Learning rate
      num_iters : (int)            number of iterations to run gradient descent
      lambda_ : (scalar, float)    regularization constant

    Returns:
      w : (ndarray Shape (n,)) Updated values of parameters of the model after
          running gradient descent
      b : (scalar)             Updated value of parameter of the model after
          running gradient descent
    """

    # number of training examples
    m = len(X)

    # Arrays to store cost J and w at each iteration, primarily for graphing later
    J_history = []
    w_history = []

    for i in range(num_iters):

        # Calculate the gradient and update the parameters
        dj_db, dj_dw = gradient_function(X, y, w_in, b_in, lambda_)

        # Update parameters using w, b, alpha and the gradient
        w_in = w_in - alpha * dj_dw
        b_in = b_in - alpha * dj_db

        # Save cost J at each iteration
        if i < 100000:      # prevent resource exhaustion
            cost = cost_function(X, y, w_in, b_in, lambda_)
            J_history.append(cost)

        # Print the cost at ~10 evenly spaced intervals (or every iteration if num_iters < 10)
        if i % math.ceil(num_iters / 10) == 0 or i == (num_iters - 1):
            w_history.append(w_in)
            print(f"Iteration {i:4}: Cost {float(J_history[-1]):8.2f}   ")

    return w_in, b_in, J_history, w_history  # return w, b and J, w history for graphing
```

Since all of the code in gradient_descent() was provided with the notebook, the implication is that the problem is in either your compute_cost() or your compute_gradient() function.
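To illustrate the point: the loop calls `cost_function(X, y, w_in, b_in, lambda_)`, so your compute_cost() must accept exactly those five arguments. Here's a minimal sketch with the right shape; the body is an illustrative regularized logistic-regression cost, not the assignment's graded solution:

```python
import numpy as np

def compute_cost(X, y, w, b, lambda_):
    """Illustrative regularized logistic-regression cost.

    The signature is the important part: gradient_descent() passes five
    positional arguments, so all five parameters must be listed here.
    """
    m = X.shape[0]
    z = X @ w + b
    f = 1.0 / (1.0 + np.exp(-z))                 # sigmoid predictions
    cost = -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))
    reg = (lambda_ / (2 * m)) * np.sum(w ** 2)   # regularization term
    return cost + reg
```

If your function drops the lambda_ parameter, the five-argument call inside the loop fails with a TypeError, which is exactly the kind of mismatch to check for.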

Or, there’s no problem at all, because if you look in the notebook, it says this:
(screenshot of the notebook omitted)

@Antonio_Lopez_Jr, please post a screen capture image that shows the error message you’re discussing.

I don’t see a clear statement here of which error you’re seeing specifically.

My understanding is that there is no error there, since it all checks out; the code I shared was provided and meant to be run. This is the error that blocks me from passing the assignment:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-30-0b0e9eefc342> in <module>
      8 
      9 w,b, J_history,_ = gradient_descent(X_train ,y_train, initial_w, initial_b, 
---> 10                                    compute_cost, compute_gradient, alpha, iterations, 0)

<ipython-input-27-4e7b77bafa3d> in gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters, lambda_)
     32 
     33         # Calculate the gradient and update the parameters
---> 34         dj_db, dj_dw = gradient_function(X, y, w_in, b_in, lambda_)
     35 
     36         # Update Parameters using w, b, alpha and gradient

TypeError: compute_gradient() takes 4 positional arguments but 5 were given
```

Did you modify the first line of the compute_gradient() function?
It should look like this:
(screenshot of the correct first line omitted)
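Based on the traceback, gradient_descent() passes five positional arguments to the gradient function, so the def line must accept all five. A sketch of what that looks like, with parameter names taken from gradient_descent()'s docstring; the body is an illustrative logistic-regression gradient, not the graded solution:

```python
import numpy as np

def compute_gradient(X, y, w, b, lambda_):
    """Illustrative logistic-regression gradient.

    The first line is what the TypeError is about: gradient_descent()
    calls this with five positional arguments, so all five (including
    lambda_) must appear in the signature.
    """
    m = X.shape[0]
    f = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid predictions
    err = f - y
    dj_dw = (X.T @ err) / m + (lambda_ / m) * w   # gradient w.r.t. w
    dj_db = np.sum(err) / m                       # gradient w.r.t. b
    return dj_db, dj_dw                           # order matches the unpacking in the loop
```

Note the return order (dj_db first, then dj_dw) matches the `dj_db, dj_dw = gradient_function(...)` unpacking inside gradient_descent().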

It turns out I had; restoring it did the trick. Not sure why I changed that first line.

Thanks for your help!

That’s how it should have been when you first opened the notebook.