Hello,

Issue: Although the same code works fine for the `relu` branch, it fails for the `sigmoid` branch at the line shown below:

Partial Code:

```python
elif activation == "sigmoid":
    # (≈ 2 lines of code)
    # dZ = ...
    # dA_prev, dW, db = ...
    # YOUR CODE STARTS HERE
    dZ = sigmoid_backward(dA, activation_cache)
    # ERROR LINE:
    dA_prev, dW, db = linear_backward(dZ, cache)
    # YOUR CODE ENDS HERE
```

Error:

ValueError: not enough values to unpack (expected 3, got 2)

(Attached pictures)
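For reference, this is Python's generic tuple-unpacking error; a minimal sketch, independent of the assignment code, reproduces the same message:

```python
# A 2-tuple, shaped like the per-layer cache (linear_cache, activation_cache):
pair = ("linear_cache_stub", "activation_cache_stub")

try:
    a, b, c = pair          # asks for 3 values, but pair only holds 2
except ValueError as err:
    print(err)              # not enough values to unpack (expected 3, got 2)
```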

Query:

If both the `relu` and `sigmoid` branches call `linear_backward`, why is ONLY the `sigmoid` branch failing?

Sincerely,

A


Anurag,

There are two issues:

- This line passes the wrong cache: `dA_prev, dW, db = linear_backward(dZ, cache)`

Here `cache` is the tuple of values `(linear_cache, activation_cache)` that we store for computing backward propagation efficiently.

- As you can see from `linear_backward`, its `cache` argument should be the tuple of values `(A_prev, W, b)` coming from the forward propagation in the current layer.

But in your case one value is missing when the graded cell unpacks it:

GRADED FUNCTION: linear_backward

`def linear_backward(dZ, cache):` — 3 values were expected, but only 2 were received.

So make sure you compute the correct result for `b`:

`db` – gradient of the cost with respect to `b` (current layer l), same shape as `b`.

Check your `db` code in:

GRADED FUNCTION: linear_backward

`def linear_backward(dZ, cache):`

Hint: refer to the image, and notice the difference between the formulas for `dW` and `db`.
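The difference being hinted at can be sketched as follows; this is the standard vectorized form (with assumed shapes: `dZ` is `(n_l, m)`, `A_prev` is `(n_prev, m)`), not necessarily the exact grader solution:

```python
import numpy as np

m = 5                                        # number of examples (assumed)
dZ = np.random.randn(3, m)                   # gradient of the linear output, shape (n_l, m)
A_prev = np.random.randn(4, m)               # activations from the previous layer

# dW multiplies by A_prev.T; db is only a sum over the examples axis.
dW = np.dot(dZ, A_prev.T) / m                # shape (3, 4), same as W
db = np.sum(dZ, axis=1, keepdims=True) / m   # shape (3, 1), same as b
```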

Regards

DP

Thanks, Deepti, for the response.

I tried, and I am probably running out of ideas here for `db`.

Based on the above mathematical equation, the following has been implemented programmatically:

- *(-1/m)* => implemented.

- *Summation (from 1 to m) → np.sum* => implemented.

- *Variable dZ → dZ* => implemented.

Query: Is there anything missing mathematically?

Side note: I have also referred to previous examples in Week 3 (E5 (Compute Cost), E6 (Backward_propagation)), and have followed the same steps.

I also believe `np.dot` and `np.squeeze` will not work here.
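On the `np.squeeze` point: the requirement that `db` have the *same shape as `b`* is exactly why `keepdims` matters; a quick sketch with assumed shapes:

```python
import numpy as np

dZ = np.random.randn(3, 4)                       # assumed shape (n_l, m)
m = dZ.shape[1]

db_keep = np.sum(dZ, axis=1, keepdims=True) / m  # shape (3, 1) — matches b
db_flat = np.squeeze(db_keep)                    # shape (3,)  — no longer matches b
```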

Sincerely,

A


I think you misinterpreted what Deepti said. Look again at the way you call `linear_backward`. You are passing the wrong value as the `cache` argument there. Which cache should it be? The error message was actually quite specific: it said it was expecting 3 elements, but it got only 2. So how could that happen?


Hello Paul,

Thanks. Below is my assumption:

**Exercise 7:**

`def linear_backward(dZ, cache):`

The above means two variables are passed in, and three are returned.

In my code, I am passing two variables:

`dA_prev, dW, db = linear_backward(dZ, cache)`

From what I learned in C, Pascal, and Matlab, if I passed three variables in the above case, there could be a run-time/compilation error.

A little further suggestion would immensely assist.

Sincerely,

A


Yes, you must pass two variables as the arguments to `linear_backward`. The point is that what you are passing as the *second* argument is wrong: it is a 2-tuple when it should be a 3-tuple. How did that happen?
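To make the 2-tuple vs. 3-tuple distinction concrete, here is a self-contained sketch of the usual cache flow. The `sigmoid_backward` and `linear_backward` bodies below are minimal stand-ins written for illustration only, not the assignment's graded code:

```python
import numpy as np

def sigmoid_backward(dA, activation_cache):
    # Stand-in: dZ = dA * s * (1 - s), where s = sigmoid(Z).
    Z = activation_cache
    s = 1.0 / (1.0 + np.exp(-Z))
    return dA * s * (1 - s)

def linear_backward(dZ, cache):
    # Stand-in: expects the inner 3-tuple (A_prev, W, b).
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db

# Per-layer cache, as stored during forward propagation:
A_prev, W, b = np.random.randn(4, 5), np.random.randn(3, 4), np.random.randn(3, 1)
Z = np.dot(W, A_prev) + b
cache = ((A_prev, W, b), Z)             # (linear_cache, activation_cache): a 2-tuple

dA = np.random.randn(3, 5)
linear_cache, activation_cache = cache  # split the 2-tuple first...
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)  # ...then pass the inner 3-tuple
```

Passing `cache` itself (the outer 2-tuple) into `linear_backward` would make its `A_prev, W, b = cache` line raise exactly the reported ValueError.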

You need to look at the error message and where the exception is being “thrown”. The first step in debugging is always to understand what the error message is telling you. It is telling you what is wrong on a very specific line of code. Then you need to work backwards to figure out why that happened.


Thanks a lot, Paul, for pointing out the detail that I could not capture.

Is there a textbook related to the course that could assist in understanding the statistical aspects of the neural network models in the course?

It would help me attain a better understanding.

Sincerely,

A


Here’s a thread that lists quite a few different books that discuss ML/DL topics. I’m not sure which one would be the most applicable for your question about the statistical aspects of ML, but the book by Goodfellow et al. is one of the more mathematical ones.