I have spent quite some time trying to figure out why my output does not match the expected one, but I could not work out what I might have missed.
I am using the code as given in the instructions:
logprobs = np.multiply(np.log(A2), Y)
cost = -np.sum(logprobs)
Based on the test example the grader runs (which has Y = [True, False, False]), the sum for the cross-entropy cost function should just be the negative log of the first entry of A2, and when I calculate that manually it matches my output. For some reason the grader expects a slightly different value.
Is there something wrong with the expected output, or could someone point me in the right direction if I am missing something here?
My output: cost = 0.6926858869721941
Expected: cost = 0.6930587610394646
Hi @Thomas_Knoche ,
I don't know exactly what you are doing, but note that you need to follow the instructions and implement equation (13), which means the final code you write will not be exactly the same as the example given in the comments.
Thanks for your comment. It seems I misunderstood the instructions and only implemented the part of the cost function that was mentioned in the “Exercise 5” section. It is meant as a helpful example, but I somehow took it as a new function to be used instead of (13).
When I fully implemented (13), I got the expected output. What is weird, however, is that the rest of the assignment did not break (all further tests passed without issue) even when I used the wrong cost function…
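For anyone else who runs into this, here is a minimal sketch of the difference, assuming equation (13) is the standard binary cross-entropy averaged over m examples. The `A2` and `Y` values below are made up for illustration and are not the actual grader test values:

```python
import numpy as np

# Hypothetical activations and labels, chosen only to illustrate the issue
A2 = np.array([[0.5002, 0.4998, 0.5001]])
Y = np.array([[1, 0, 0]])
m = Y.shape[1]

# Partial version from the snippet: only the Y * log(A2) term, no 1/m
partial = -np.sum(np.multiply(np.log(A2), Y))

# Full cross-entropy (assumed form of equation (13)):
# both the Y and (1 - Y) terms, averaged over the m examples
cost = -(1 / m) * np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))
```

With labels like `[1, 0, 0]`, the partial version only counts the first column, while the full formula also penalizes the `(1 - Y) * log(1 - A2)` terms for the other columns, so the two values end up close but not identical, which is exactly the small mismatch above.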
Well in any case, thanks for your help! Much appreciated.