While computing Jcostfunction: "RuntimeWarning encountered in log"


Screenshot from 2022-09-28 13-32-11
Patage and tumorsize are my input features.
Diagnosislabel is my output label.
Preds are the linear-regression values that go as inputs into the sigmoid.
Sigout are the final sigmoid outputs of the Preds values.

Finally, here is the part of the code (causing the error), implemented in a function that returns Jcostfunction across all samples (here 10).
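For context, a minimal sketch of the setup as I understand it from the description above (all feature values, weights, and the bias below are invented for illustration; only the variable names follow the post, not the actual notebook):

```python
import numpy as np

# Hypothetical reconstruction of the setup described above.
# Feature values, weights, and bias are made up for illustration.
patage = np.array([30., 45., 60., 25., 50., 70., 35., 55., 65., 40.])
tumorsize = np.array([1.2, 2.5, 3.8, 0.9, 2.0, 4.1, 1.5, 3.0, 3.9, 1.8])
knowndiagnosis = np.array([0, 0, 1, 0, 0, 1, 0, 1, 1, 0])  # output labels

w = np.array([0.05, 1.0])  # example weights
b = -3.0                   # example bias

X = np.column_stack([patage, tumorsize])
preds = X @ w + b                             # linear-regression values
sigmoidoutput = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of preds

# log-loss averaged across all 10 samples
loss = -(knowndiagnosis * np.log(sigmoidoutput)
         + (1 - knowndiagnosis) * np.log(1 - sigmoidoutput))
Jcostfunction = loss.mean()
```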

Any help appreciated.

Hello @tennis_geek,

This assignment doesn’t look familiar to me. Which course or which specialization is it from?

Raymond

ML specialization Course 1 Week 3.
This is my independent, parallel implementation of the optional labs discussed.

I think this requires a careful numerical debugging process: taking the numbers one by one and checking why the error is raised while calculating the log.

I see. Can you also share the whole Error Traceback?

Rough weather & a power cut now. Will resume posting with traceback call soon.

Okay! Thanks for letting me know. I will come back later.


Hi
Reuploading the screenshot for the runtime warning I got. There was no Traceback raised by the interpreter.

Which means it is not an error, but the output isn’t what you are expecting. In that case, I would recommend printing the variables out and checking them one by one to see which one is not normal. For example, in this part of the line:

Screenshot from 2022-09-29 11-38-16

You may want to do something like the below inside the loop:

print(eachrow, lossfunction) #0th
print(knowndiagnosis[eachrow]) #1st
print(sigmoidoutput[eachrow]) #2nd
print(np.log(sigmoidoutput[eachrow])) #3rd
print(knowndiagnosis[eachrow] * np.log(sigmoidoutput[eachrow])) #4th
print(.....) #
print(.....) #
.....

Note that I wouldn’t do only the 2nd line of prints, but also the 3rd line, which additionally applies np.log. Similarly, instead of doing only the 1st and the 3rd, I am also doing the 4th, which simply multiplies the 1st and the 3rd together.
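If there are many rows, a compact variant of the same idea is to let the loop flag only the suspicious rows. A sketch with made-up values (row 2 is deliberately saturated at exactly 1.0):

```python
import numpy as np

# Made-up example values; row 2 is saturated at exactly 1.0
sigmoidoutput = np.array([0.2, 0.7, 1.0, 0.4])
knowndiagnosis = np.array([0, 1, 0, 0])

for eachrow in range(len(sigmoidoutput)):
    p = sigmoidoutput[eachrow]
    # either log argument hitting zero (or below) will trigger the warning
    if p <= 0.0 or (1.0 - p) <= 0.0:
        print(eachrow, knowndiagnosis[eachrow], p)  # suspicious row
```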

Hope you will see something unexpected to pin down the source of the problem.

Raymond

Thanks, I have to do some digging for sure. I did a quick calculation using LibreOffice Calc and got the same sort of error in it. Uploading the screenshot.
My suspicion: a coding bug.


will update the progress. Thanks.


Hi
After carefully checking the runtime warning inside the loop where the loss function is computed, I noticed this discrepancy:
For a known output label which is 0, the loss function should be: `-(1 - knowndiagnosis[eachrow]) * np.log(1 - sigmoidoutput[eachrow])`
I broke this formula down further and, after some check iterations, narrowed it down to this scenario: whenever knowndiagnosislabel is 0, I get the runtime warning.

  1. The corresponding value of sigmoidoutput[eachrow] (f_wb_i) is 1.0.
  2. Next I checked the part of the simplified loss function when the output label = 0, which is -1 * log(1 - f_wb_i).
  3. np.log(1 - 1.0) is throwing a runtime warning and evaluating to -inf.

Mathematically, it makes sense what is causing this runtime warning here.

Screenshot from 2022-09-29 11-04-05
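This behaviour is easy to reproduce in isolation: np.log(0.0) emits a RuntimeWarning and returns -inf instead of raising an exception, which is also why there was no traceback:

```python
import numpy as np
import warnings

# Capture the warning so we can inspect it instead of just seeing it printed
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = np.log(1 - 1.0)  # np.log(0.0)

print(result)              # -inf, not an exception
print(caught[0].category)  # RuntimeWarning
```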

@rmwkwok Finished part of my ML C1 W3 assignment, up until regularized logistic regression. I implemented the code successfully on the first go itself. Now this baffles me even more, because I finished the assignment using the same lines from my local program that were giving these runtime warnings.

As you said, np.log(0) should give you an unexpected result. As for the assignment, maybe it never passes 0 to the log? You can verify this by, again, printing whatever is put inside np.log.

By the way, this post talks about how we usually avoid np.log(0).

@rmwkwok So, in a way, constraining the range of values that go into the log calculation might help avoid this runtime warning? (with reference to the np.clip function)

Well, yes. If you use np.clip, then you make sure np.log never sees a zero.
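A sketch of that approach (the epsilon value below is an arbitrary choice, just small enough not to disturb the cost):

```python
import numpy as np

# Made-up values; the endpoints 0.0 and 1.0 would break the logs
sigmoidoutput = np.array([0.0, 0.3, 0.99, 1.0])
knowndiagnosis = np.array([0, 0, 1, 1])

eps = 1e-15
clipped = np.clip(sigmoidoutput, eps, 1 - eps)  # keep strictly inside (0, 1)

loss = -(knowndiagnosis * np.log(clipped)
         + (1 - knowndiagnosis) * np.log(1 - clipped))
Jcostfunction = loss.mean()
print(Jcostfunction)  # finite, no RuntimeWarning
```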


Will keep you posted! Thanks much again for the inputs!


np.clip() solves the issue! Cheers