Hi Paul
This may be a double posting - the first one just disappeared on me.
Is there a loop inside your caller function for Ex 3.3?
The following
gives me this output - and I have created no higher level loops
Regards
Ian
Hi, Ian.
Sorry, I don’t understand your point. Which exercise are you talking about? If you mean conv_forward, there are literally four levels of nested loops there. You also have to be very careful with the indentation: that is what defines the body of a loop in Python, right?
The loop structure in conv_forward is this:
for every sample:
    for every vertical position in the output space:
        for every horizontal position in the output space:
            for every output channel:
                do the computation
Also note that the test cell for conv_forward invokes conv_forward more than once with different combinations of parameters.
Ah… this is possibly what I am seeing.
I was getting some very weird results when I printed the contents and shapes of a_slice_prev and other variables at various points.
I was not aware the routine was called four separate times with changing W, stride, and pad.
Would it be fair to suggest that this be mentioned in the assignment?
Regards
Ian
The notebook already provides you with all four for-loop constructs:
# for i in range(None):               # loop over the batch of training examples
#     a_prev_pad = None               # Select ith training example's padded activation
#     for h in range(None):           # loop over vertical axis of the output volume
#         # Find the vertical start and end of the current "slice" (≈2 lines)
#         vert_start = None
#         vert_end = None
#         for w in range(None):       # loop over horizontal axis of the output volume
#             # Find the horizontal start and end of the current "slice" (≈2 lines)
#             horiz_start = None
#             horiz_end = None
#             for c in range(None):   # loop over channels (= #filters) of the output volume
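For reference, here is a minimal sketch of how those four loops fit together once the blanks are filled in. This is not the official notebook solution: the function name and the assumed shapes (W of shape (f, f, n_C_prev, n_C), b of shape (1, 1, 1, n_C)) are my own choices, matching the usual conventions in this assignment.

```python
import numpy as np

def conv_forward_sketch(A_prev, W, b, stride, pad):
    """Sketch of the four-loop convolution forward pass.
    Assumes A_prev is (m, n_H_prev, n_W_prev, n_C_prev),
    W is (f, f, n_C_prev, n_C), and b is (1, 1, 1, n_C)."""
    m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
    f, _, _, n_C = W.shape
    # Output dimensions follow the standard convolution size formula
    n_H = (n_H_prev + 2 * pad - f) // stride + 1
    n_W = (n_W_prev + 2 * pad - f) // stride + 1
    Z = np.zeros((m, n_H, n_W, n_C))
    # Zero-pad only the spatial axes, not the batch or channel axes
    A_prev_pad = np.pad(A_prev, ((0, 0), (pad, pad), (pad, pad), (0, 0)))
    for i in range(m):                      # loop over the batch
        a_prev_pad = A_prev_pad[i]
        for h in range(n_H):                # vertical axis of the output
            vert_start = stride * h         # stride applies in the INPUT space
            vert_end = vert_start + f
            for w in range(n_W):            # horizontal axis of the output
                horiz_start = stride * w
                horiz_end = horiz_start + f
                for c in range(n_C):        # output channels (= filters)
                    a_slice = a_prev_pad[vert_start:vert_end,
                                         horiz_start:horiz_end, :]
                    Z[i, h, w, c] = np.sum(a_slice * W[..., c]) + b[0, 0, 0, c]
    return Z
```

Note that only the start indices depend on the stride; the window size is always the filter size f.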
Well, to be fair, you can just look at the test code. It’s in the file public_tests.py. In the test cell in the notebook, you see one explicit invocation of conv_forward and then it invokes conv_forward_test_1 and conv_forward_test_2 from public_tests.py. If you’ve been paying attention, there are almost always at least two tests for every function starting way back in Course 1: one you can actually see in the notebook and then another that’s invoked from public_tests.py.
Actually, looking at your results, my guess is that your stride logic is wrong. Note that you fail the values for the first stride = 2, pad = 1 test case, but seem to pass the others. If you examine the test code in public_tests.py, you’ll find that they don’t actually check anything other than the shapes for the second test case that has stride = 2. The only other one where they test actual output values is a stride = 1 case.
That is by far the most popular mistake to make on both conv_forward and pool_forward. The striding happens in the input space, not the output space, right? It looks like you have your loop limits correct from the loop numbers you show, so it’s most likely that you simply didn’t include the stride in the calculations for vert_start and horiz_start. The other popular way to make this mistake is to include the stride in the loop ranges, but I’m guessing that’s not your mistake from the evidence.
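To make the "striding happens in the input space" point concrete, here is a tiny hypothetical helper (the function name is mine, not from the notebook) that maps an output position to the input-space window it should read:

```python
def output_to_input_window(h, w, stride, f):
    """Compute the input-space window read by output position (h, w).
    The stride scales the START indices; the window size is always f."""
    vert_start = stride * h          # correct: stride applied here
    vert_end = vert_start + f
    horiz_start = stride * w
    horiz_end = horiz_start + f
    # A common bug is vert_start = h (no stride), which only happens to
    # give the right answer when stride == 1.
    return vert_start, vert_end, horiz_start, horiz_end

# With stride = 2 and f = 3, output position (1, 1) reads input rows
# and columns 2..4 (slice indices 2:5):
print(output_to_input_window(1, 1, stride=2, f=3))  # (2, 5, 2, 5)
```

That is why an implementation missing the stride factor can still pass every stride = 1 test and fail only the stride = 2 value checks.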
I haven’t submitted it yet, but after a lot of printing and reading oddly formatted matrices, my conv_forward just passed all the tests.
The problem appears to have been how I was selecting weights and biases using the […,:] notation.
I was consistently selecting the 5th of 8 output filters instead of using the loop variable to select the appropriate weights.
Thanks for your input on this challenging assignment. Very helpful.
Ian