Hi,
I’m getting the following error when I implement the gradient_check_n code:
If I understand correctly, I should be passing theta_plus to vector_to_dictionary(), not theta_plus[i], because I want to convert the vector back to dictionary form – passing a scalar would not make sense. Why am I getting this error then?
Thanks,
MP
You filed this under “Deep Learning Resources”, so I moved it to “DLS Course 2”.
I think the problem is that forward_propagation returns two distinct values, but you have assigned the whole return value to a single element of J_plus. That is what it is complaining about. When you assign multiple return values to a single variable, Python “helps you out” by packing all the return values into a tuple and assigning that tuple to the variable. In other languages (e.g. MATLAB), it would simply give you the first return value and discard the rest. It is a legitimate debate whether that would be more useful behavior, but we don’t really have a choice here: we are coding in Python and it “is what it is”.
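Here is a minimal sketch of that packing behavior, using a made-up function (my_function, and the cost/cache names, are just stand-ins for what forward_propagation returns):

```python
def my_function():
    # Hypothetical stand-in for forward_propagation:
    # returns two values, e.g. a cost and a cache.
    cost = 0.5
    cache = {"Z1": [1, 2, 3]}
    return cost, cache

# Assigning both return values to ONE variable packs them into a tuple.
result = my_function()
print(type(result))  # <class 'tuple'>
print(result[0])     # 0.5 -- the first return value

# Unpacking into two variables gives each value separately.
cost, cache = my_function()
print(cost)          # 0.5
```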
If you don’t care about the second return value from myFunction, you would write:
firstReturn, _ = myFunction()
Or you could make up a throw-away variable to hold the second value, but using _ to say “don’t care” is the more idiomatic choice.
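Applied to gradient_check_n, the fix would look something like the sketch below. Note this is only an illustration: forward_propagation_n here is a dummy stub standing in for the course’s real function (which returns a cost and a cache), and the exact signatures in your notebook may differ.

```python
import numpy as np

def forward_propagation_n(X, Y, parameters):
    # Dummy stub standing in for the assignment's function, which
    # returns (cost, cache). Here it just returns a fake cost so
    # the snippet is runnable.
    cost = float(np.sum(X) + np.sum(Y))
    cache = {"params": parameters}
    return cost, cache

J_plus = np.zeros(3)
X, Y = np.ones((2, 2)), np.ones((1, 2))

# Unpack: keep the cost in J_plus[i], discard the cache with "_".
J_plus[0], _ = forward_propagation_n(X, Y, {"W1": 0.1})
print(J_plus[0])
```

Assigning the whole (cost, cache) tuple to J_plus[i] instead would fail, because a float slot in a numpy array cannot hold a tuple.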