W4_A1_Ex-4 got numpy.ndarray but expected tuple

I am absolutely scratching my head on this assignment. Why in the world is the test looking for a tuple type?

Also, the notes suggest using: A, activation_cache = sigmoid(Z)
But Python syntax would make it such that A = the first element and activation_cache = the second element of the sequence returned by sigmoid(Z). Am I crazy?

Thank you to anyone who can demystify this for me.

Error log below:

With sigmoid: A = [[0.96890023 0.11013289]]
With ReLU: A = [[3.43896131 0. ]]
Error: Datatype mismatch with sigmoid activation in variable 0. Got type: <class 'numpy.ndarray'> but expected type <class 'tuple'>
Error: Wrong shape with sigmoid activation for variable 0.
Error: Wrong shape with sigmoid activation for variable 1.
Error: Wrong shape with sigmoid activation for variable 2.
Error: Wrong output with sigmoid activation for variable 0.
Error: Wrong output with sigmoid activation for variable 1.
Error: Wrong output with sigmoid activation for variable 2.
Error: Datatype mismatch with relu activation in variable 0. Got type: <class 'numpy.ndarray'> but expected type <class 'tuple'>
Error: Wrong shape with relu activation for variable 0.
Error: Wrong shape with relu activation for variable 1.
Error: Wrong shape with relu activation for variable 2.
Error: Wrong output with relu activation for variable 0.
Error: Wrong output with relu activation for variable 1.
Error: Wrong output with relu activation for variable 2.
0 Tests passed
6 Tests failed

Have a look at the test case. You can click “File → Open” and then open the file public_tests.py. You’ll see that it checks the cache values as well and those are tuples, right?

Actually, I took a look at the test case, and what it does is supply one variable to receive the return values of linear_activation_forward. That variable should therefore be a tuple with two elements: the activation output (an np.ndarray) and the cache (a tuple).
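To illustrate the point about a single variable receiving multiple return values: when a Python function returns two values and you assign the result to one name, the values are packed into a tuple, which is exactly the type the test checks for. Here is a minimal sketch with a hypothetical stand-in function (not the actual grader or assignment code):

```python
import numpy as np

def forward_stub(A_prev):
    # Hypothetical stand-in for linear_activation_forward:
    # returns an activation output and a cache, like the real function.
    A = np.maximum(0, A_prev)                      # pretend activation output
    cache = ("linear_cache", "activation_cache")   # the cache is itself a tuple
    return A, cache                                # packed into one tuple on return

# One variable on the LHS receives the packed tuple:
result = forward_stub(np.array([[1.0, -2.0]]))
print(type(result))   # <class 'tuple'>

# Two variables on the LHS unpack the tuple:
A, cache = forward_stub(np.array([[1.0, -2.0]]))
print(type(A))        # <class 'numpy.ndarray'>
```

So `A, activation_cache = sigmoid(Z)` in the notes is ordinary tuple unpacking, and the grader's single-variable call is ordinary tuple packing of the same two return values.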

So what this probably means is that you changed the “return” logic of the function so that it returns only the sigmoid or relu output. That was part of the template code and should not have needed to be changed.

I’m just guessing here, but it’s something along those lines.

Thanks for the help! Still stuck… I feel like I am missing something stupid. I did not change the return statement.

The thing I am scratching my head on is really the following:

{moderator edit - solution code removed}

The first set of code is what the notes appear to suggest. But it does not make sense based on my understanding of Python syntax, and it gives me the expected "cannot unpack" error.

The second set of code gives me all kinds of issues…

For starters, you should be calling linear_forward instead of manually implementing it here. Notice that the RHS of that assignment statement only produces one value, right? So how does it make any sense for the LHS to expect a tuple? Note that linear_forward returns two values.

And what does this do?

Z = linear_cache = np.dot(W, A_prev) + b

After that both Z and linear_cache will have the same value, right? That’s not what you want.
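To see why that chained assignment is a problem, here is a small sketch (the array values are made up for illustration): chained assignment binds both names to the same object, so linear_cache ends up holding Z rather than the (A_prev, W, b) tuple the backward pass needs.

```python
import numpy as np

W = np.array([[1.0, 2.0]])
A_prev = np.array([[3.0], [4.0]])
b = np.array([[0.5]])

# Chained assignment: BOTH names are bound to the same array object.
Z = linear_cache = np.dot(W, A_prev) + b

print(Z is linear_cache)        # True -- linear_cache is just another name for Z
print(isinstance(linear_cache, tuple))  # False -- but the grader expects a tuple here
```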

I think the fundamental issue here is missing the point about calling linear_forward. We’re building the functions in a modular way here.

Instead of computing Z directly, use the linear_forward function, which returns Z and linear_cache.
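The modular pattern being described can be sketched roughly as follows. This is a simplified illustration of the calling structure, not the assignment's exact code: the helper bodies and the sigmoid-only signature (the real function takes an `activation` argument) are assumptions here.

```python
import numpy as np

def linear_forward(A_prev, W, b):
    # Returns Z plus a cache of the inputs, for use in backpropagation.
    Z = np.dot(W, A_prev) + b
    linear_cache = (A_prev, W, b)
    return Z, linear_cache

def sigmoid(Z):
    # Returns the activation plus a cache of Z.
    A = 1 / (1 + np.exp(-Z))
    activation_cache = Z
    return A, activation_cache

def linear_activation_forward(A_prev, W, b):
    # Simplified sketch: call the helper, don't re-implement it.
    Z, linear_cache = linear_forward(A_prev, W, b)   # tuple unpacking, two names
    A, activation_cache = sigmoid(Z)                 # same pattern again
    cache = (linear_cache, activation_cache)         # the cache the tests expect
    return A, cache

# Usage: two names on the LHS unpack the two return values.
A, cache = linear_activation_forward(np.array([[3.0], [4.0]]),
                                     np.array([[1.0, 2.0]]),
                                     np.array([[0.5]]))
```

The design point is modularity: each helper returns its output together with a cache, and the caller unpacks both with `x, y = f(...)` rather than recomputing anything.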