Course 1, Week 2, Exercise 5 Output

Exercise 5 - propagate

I think I finally figured this one out, but I have a question of curiosity about the output.
For debugging and learning purposes, I put in a lot of print statements to track my variables and dimensions. When I run the exercise validation cell (In[29]), it seems to run through my code twice. The first time, my code produces the Expected Output, so I think I’m good, but the second time through, the output is different. Is this the expected behavior? (Don’t worry about all the intermediate print statements checking things; I’m just asking whether the second run is expected.)

** HERE ARE MY PRINT STATEMENTS (OUTPUT) **
b: 1.5, shape: ()
X: [[ 1. -2. -1. ]
[ 3. 0.5 -3.2]], shape: (2, 3)
Y: [[1 1 0]], shape: (1, 3)
w: [[1.]
[2.]], shape: (2, 1)
m: 3, shape: ()
wT: [[1. 2.]], shape: (1, 2)
wX: [[ 1. -2. -1. ]
[ 6. 1. -6.4]], shape: (2, 3)
wTX: [[ 7. -1. -7.4]], shape: (1, 3)
wXb: [[ 2.5 -0.5 0.5]
[ 7.5 2.5 -4.9]], shape: (2, 3)
wTXb: [[ 8.5 0.5 -5.9]], shape: (1, 3)
A: [[0.99979657 0.62245933 0.00273196]], shape: (1, 3)
A: [[0.99979657 0.62245933 0.00273196]], shape: (1, 3)
yLogA: [[-2.03447672e-04 -4.74076984e-01 -0.00000000e+00]], shape: (1, 3)
secondTerm: [[-0. -0. -0.0027357]], shape: (1, 3)
cost1: 0.15900537707692405, shape: ()
cost3: 0.15900537707692405, shape: ()
cost2: [[0.15900538]], shape: (1, 1)
AmY: [[-2.03426978e-04 -3.77540669e-01 2.73196076e-03]], shape: (1, 3)
AmY: [[-2.03426978e-04 -3.77540669e-01 2.73196076e-03]], shape: (1, 3)
AmYT: [[-2.03426978e-04]
[-3.77540669e-01]
[ 2.73196076e-03]], shape: (3, 1)
AmYT: [[-2.03426978e-04]
[-3.77540669e-01]
[ 2.73196076e-03]], shape: (3, 1)
XAmY: [[-2.03426978e-04 7.55081338e-01 -2.73196076e-03]
[-6.10280934e-04 -1.88770334e-01 -8.74227444e-03]], shape: (2, 3)
XAmYT: [[ 0.75214595]
[-0.19812289]], shape: (2, 1)

** HERE’S THE EXPECTED OUTPUT, FROM PRINTING WITHIN MY CODE **
dw: [[ 0.25071532]
[-0.06604096]], shape: (2, 1)
dwType: <class 'numpy.ndarray'>
db: -0.12500404500439652, shape: ()
End my Code
** HERE’S WHERE THE FIRST RUN ENDS **

** EXPECTED OUTPUT **
dw = [[ 0.25071532]
[-0.06604096]]
db = -0.12500404500439652
cost = 0.15900537707692405
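
For reference, here is a minimal vectorized sketch that reproduces these expected values from the first test’s inputs shown in my prints above (the helper and variable names are mine, and the formulas are just the standard logistic-regression ones from the assignment):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# inputs copied from the first test case printed above
w = np.array([[1.], [2.]])
b = 1.5
X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
Y = np.array([[1, 1, 0]])

m = X.shape[1]                                   # 3 training examples
A = sigmoid(np.dot(w.T, X) + b)                  # forward pass, shape (1, m)
cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
dw = np.dot(X, (A - Y).T) / m                    # gradient w.r.t. w, shape (2, 1)
db = np.sum(A - Y) / m                           # gradient w.r.t. b, a scalar

print(dw)    # [[ 0.25071532] [-0.06604096]] (printed as a (2, 1) column)
print(db)    # -0.12500404500439652
print(cost)  # 0.15900537707692405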

** HERE’S THE PRINT OUTPUT FROM AN APPARENT SECOND RUN **
b: 2.5, shape: ()
X: [[ 1. 2. -1. 0. ]
[ 3. 4. -3.2 1. ]
[ 3. 4. -3.2 -3.5]], shape: (3, 4)
Y: [[1 1 0 0]], shape: (1, 4)
w: [[ 1.]
[ 2.]
[-1.]], shape: (3, 1)
m: 4, shape: ()
wT: [[ 1. 2. -1.]], shape: (1, 3)
wX: [[ 1. 2. -1. 0. ]
[ 6. 8. -6.4 2. ]
[-3. -4. 3.2 3.5]], shape: (3, 4)
wTX: [[ 4. 6. -4.2 5.5]], shape: (1, 4)
wXb: [[ 3.5 4.5 1.5 2.5]
[ 8.5 10.5 -3.9 4.5]
[-0.5 -1.5 5.7 6. ]], shape: (3, 4)
wTXb: [[ 6.5 8.5 -1.7 8. ]], shape: (1, 4)
A: [[0.99849882 0.99979657 0.15446527 0.99966465]], shape: (1, 4)
A: [[0.99849882 0.99979657 0.15446527 0.99966465]], shape: (1, 4)
yLogA: [[-0.00150231 -0.00020345 -0. -0. ]], shape: (1, 4)
secondTerm: [[-0. -0. -0.16778603 -8.00033541]], shape: (1, 4)
cost1: 2.0424567983978403, shape: ()
cost3: 2.0424567983978403, shape: ()
cost2: [[2.0424568]], shape: (1, 1)
AmY: [[-1.50118226e-03 -2.03426978e-04 1.54465265e-01 9.99664650e-01]], shape: (1, 4)
AmY: [[-1.50118226e-03 -2.03426978e-04 1.54465265e-01 9.99664650e-01]], shape: (1, 4)
AmYT: [[-1.50118226e-03]
[-2.03426978e-04]
[ 1.54465265e-01]
[ 9.99664650e-01]], shape: (4, 1)
AmYT: [[-1.50118226e-03]
[-2.03426978e-04]
[ 1.54465265e-01]
[ 9.99664650e-01]], shape: (4, 1)
XAmY: [[-1.50118226e-03 -4.06853956e-04 -1.54465265e-01 0.00000000e+00]
[-4.50354677e-03 -8.13707912e-04 -4.94288848e-01 9.99664650e-01]
[-4.50354677e-03 -8.13707912e-04 -4.94288848e-01 -3.49882627e+00]], shape: (3, 4)
XAmYT: [[-0.1563733 ]
[ 0.50005855]
[-3.99843238]], shape: (3, 1)

** MY PRINTED OUTPUT FROM SECOND RUN (DIFFERENT FROM ‘EXPECTED OUTPUT’) **
dw: [[-0.03909333]
[ 0.12501464]
[-0.99960809]], shape: (3, 1)
dwType: <class 'numpy.ndarray'>
db: 0.288106326429569, shape: ()
End my Code

All tests passed!

Never mind this! I just found W2_Assignment_public_tests.py, which explains why my code was executing twice.

It’s good that we can see the test code!

@inposition, also, I think it’s a fact that when you re-run a cell on a model that has already been trained, or on existing weight values, it’s very usual for that to have a large impact on the outputs.

@paulinpaloalto, kindly share your insights too. Thank you!

Hello, and thanks for your insights. I want to make sure I understand what you’re saying. I think you mean that the second run, which is performed in public_tests.py, demonstrates your point. The first run tuned the model, so to speak, so when the second run iterated, the loss values converged more quickly. Is that correct? I did notice that the loss values (b) were larger at first, but very quickly dropped.

I don’t know what Rashmi’s point is. I think you have figured out what is happening:

There are two separate tests: one that you see right there in the notebook and then a second test that is in the file public_tests.py. The two tests are completely independent and there is no “training of any models” going on here. These are “unit tests” that treat your propagate function as the “function under test”. You’ll frequently see that they have more than one test for any given function. That’s just normal SQA practice: if you write only one test for a given function, you’re really not testing it very thoroughly. For example, you’ll notice that the dimensions of the input objects are different in the two cases. Sometimes you can pass a single test in ways that do not “generalize”, e.g. you made some fixed assumption about one of the dimensions.
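
Just to make the pattern concrete, here is a rough sketch of what a pair of unit tests for propagate might look like, using the two sets of inputs and outputs from your own printouts. This is only an illustration, not the actual code in public_tests.py, and the assumption that propagate returns a grads dictionary plus a cost is mine:

import numpy as np

def propagate_test(target):
    # Case 1: 2 features, 3 examples (the same inputs as the in-notebook test)
    w, b = np.array([[1.], [2.]]), 1.5
    X = np.array([[1., -2., -1.], [3., 0.5, -3.2]])
    Y = np.array([[1, 1, 0]])
    grads, cost = target(w, b, X, Y)   # assuming target returns (grads, cost)
    assert np.allclose(grads["dw"], [[0.25071532], [-0.06604096]])
    assert np.isclose(grads["db"], -0.12500404500439652)

    # Case 2: 3 features, 4 examples, to catch code that hard-codes a dimension
    w, b = np.array([[1.], [2.], [-1.]]), 2.5
    X = np.array([[1., 2., -1., 0.], [3., 4., -3.2, 1.], [3., 4., -3.2, -3.5]])
    Y = np.array([[1, 1, 0, 0]])
    grads, cost = target(w, b, X, Y)
    assert np.allclose(grads["dw"], [[-0.03909333], [0.12501464], [-0.99960809]])
    assert np.isclose(grads["db"], 0.288106326429569)

    print("All tests passed!")

You would call it as propagate_test(propagate), and a propagate that only works for 2x3 inputs would pass Case 1 but fail Case 2.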

That said, what they do here in the notebooks is nothing remotely approaching what a real SQA engineer would do with functions like these. It is frequently the case here that you can pass all the unit tests in one of the notebooks, but then fail the grader. So passing the test cases in the notebook is a necessary, but not sufficient, condition for passing the grader.

@inposition, yes indeed, and @paulinpaloalto sir has provided a great insight, which I hadn’t perceived!

But one thing I couldn’t understand, @paulinpaloalto sir: how do these SQA practices impact the true values, and why do they generate different outputs? Do they perform some real calculations in the background? I mean, this is so vague to me.

For the tests in the notebook, it’s not vague at all: the code is there for you to examine. Just click “File → Open” and then open the file public_tests.py and read the code for the additional tests. We are writing functions here that take multiple inputs; if you change the values and shapes of the inputs, you get different outputs, right?
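
You can see that directly in your own printouts: applying the same sigmoid to the two different w.T X + b vectors you printed (wTXb) gives exactly the two different A vectors you got:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

print(sigmoid(np.array([[8.5, 0.5, -5.9]])))      # first run:  [[0.99979657 0.62245933 0.00273196]]
print(sigmoid(np.array([[6.5, 8.5, -1.7, 8.]])))  # second run: [[0.99849882 0.99979657 0.15446527 0.99966465]]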

But the grader test cases are not visible, so you can’t examine them. If you could, then it might be possible to “cheat” by hard-coding things to pass exactly the test cases they have. Just to give a trivial example, if you knew that the answer was supposed to be 42, you could pass the grader by writing this:

def myFunction(inputValue):
    return 42

Note that the grader does no analysis of your source code. It doesn’t care which functions you use or whether you used for loops when you could have used vectorized constructs. All it cares about is the answers your function gives with particular inputs. So what the grader does is run its own test cases, which are different from the test cases you can see. They are similar in concept, but have different inputs and (hence) different outputs. Make the most of the tools that you have for debugging your code by watching how your functions work on the tests in the notebook.
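
To see why the hidden tests defeat hard-coding, suppose (purely hypothetically) that myFunction were supposed to double its input:

assert myFunction(21) == 42    # a visible test that happens to expect 42: the hard-coded version passes
assert myFunction(10) == 20    # a grader test with a different input: the hard-coded 42 fails here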