NLP C1W1_UNQ_C2-Math Operators

Hi,

I am getting the error below:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [53], in <cell line: 10>()
      7 tmp_Y = (np.random.rand(10, 1) > 0.35).astype(float)
      9 # Apply gradient descent
---> 10 tmp_J, tmp_theta = gradientDescent(tmp_X, tmp_Y, np.zeros((3, 1)), 1e-8, 700)
     11 print(f"The cost after training is {tmp_J:.8f}.")
     12 print(f"The resulting vector of weights is {[round(t, 8) for t in np.squeeze(tmp_theta)]}")

Input In [52], in gradientDescent(x, y, theta, alpha, num_iters)
     32 print(J)   
     33 ### END CODE HERE ###
---> 34 J = float(J)
     35 return J, theta

TypeError: only size-1 arrays can be converted to Python scalars

while my theta update looks like this:

theta -= (alpha/m) * (np.dot(np.transpose(x),(h-y)))

I did not understand the part where we calculate the cost function num_iters times. Can anyone explain the concept and the error? I am more interested in the mathematical part…
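For reference, the loop being asked about can be sketched like this (a minimal sketch with hypothetical names, not the assignment's actual starter code): on each of the num_iters iterations we recompute the predictions h, the cost J, and the weights theta. The key detail is that J must be built from matrix products, which collapse a (1, m) row against an (m, 1) column into a single (1, 1) value, so float(J) succeeds at the end.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gradient_descent(x, y, theta, alpha, num_iters):
    """Sketch of logistic-regression gradient descent.
    x: (m, n) features, y: (m, 1) labels, theta: (n, 1) weights."""
    m = x.shape[0]
    for _ in range(num_iters):
        h = sigmoid(x @ theta)                      # (m, 1) predictions
        # (1, m) @ (m, 1) -> (1, 1), so J is effectively a scalar;
        # element-wise products here could broadcast to (m, m) instead.
        J = -(1 / m) * (y.T @ np.log(h) + (1 - y).T @ np.log(1 - h))
        theta = theta - (alpha / m) * (x.T @ (h - y))
    return float(J), theta
```

With theta initialised to zeros, h starts at 0.5 everywhere, so the initial cost is -log(0.5) ≈ 0.6931 regardless of the data; a tiny learning rate then barely moves it, which is why the notebook's expected cost after training is close to that value.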

My code calculates J as the following 10×10 matrix, but why?

[[ 0.05120002  0.05120002 -0.91453146  0.05120002  0.05120002  0.05120002
  -0.91453146 -0.91453146 -0.91453146  0.05120002]
 [ 0.06712235  0.06712235 -0.71556235  0.06712235  0.06712235  0.06712235
  -0.71556235 -0.71556235 -0.71556235  0.06712235]
 [ 0.06358555  0.06358555 -0.7539217   0.06358555  0.06358555  0.06358555
  -0.7539217  -0.7539217  -0.7539217   0.06358555]
 [ 0.06057016  0.06057016 -0.7889786   0.06057016  0.06057016  0.06057016
  -0.7889786  -0.7889786  -0.7889786   0.06057016]
 [ 0.05286312  0.05286312 -0.8901631   0.05286312  0.05286312  0.05286312
  -0.8901631  -0.8901631  -0.8901631   0.05286312]
 [ 0.05134407  0.05134407 -0.91238085  0.05134407  0.05134407  0.05134407
  -0.91238085 -0.91238085 -0.91238085  0.05134407]
 [ 0.05654181  0.05654181 -0.83961344  0.05654181  0.05654181  0.05654181
  -0.83961344 -0.83961344 -0.83961344  0.05654181]
 [ 0.06360477  0.06360477 -0.7537055   0.06360477  0.06360477  0.06360477
  -0.7537055  -0.7537055  -0.7537055   0.06360477]
 [ 0.05214667  0.05214667 -0.9005385   0.05214667  0.05214667  0.05214667
  -0.9005385  -0.9005385  -0.9005385   0.05214667]
 [ 0.06307207  0.06307207 -0.75973174  0.06307207  0.06307207  0.06307207
  -0.75973174 -0.75973174 -0.75973174  0.06307207]]

All right, I should be more careful with the matrix operators. When I changed all the multiplications to matrix products (np.dot() / @), everything ran cleanly. Still, I get the following errors in the test phase.

Wrong output for the loss function. Check how you are implementing the matrix multiplications. 
	Expected: 0.6709497038162118.
	Got: -2.901607345226025.
Wrong output for the loss function. Check how you are implementing the matrix multiplications. 
	Expected: 6.5044107216556135.
	Got: 6.349765160772504.
 6  Tests passed
 2  Tests failed

Hi @Ahmet_Kasim_Erbay,

Your J is correct, but also incorrect. I mean, the way you have written it, everything is right, but you have one pair of brackets too many, which causes the calculations to not end up the way they should.

Hope this hint helps,
Mubsi

Also, I’d ask you to avoid using in-place operations like -= or +=. I fear they might cause you headaches when your assignment goes to the autograder for grading; it might give you errors which will not make sense. Best to avoid these and write the updates out in full.
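For example, the theta update written out in full rather than with an augmented assignment (a sketch with made-up shapes; the mathematics is identical):

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(10, 3)                          # (m, n) features
y = (np.random.rand(10, 1) > 0.35).astype(float)   # (m, 1) labels
theta = np.zeros((3, 1))
alpha, m = 1e-8, x.shape[0]
h = 1 / (1 + np.exp(-(x @ theta)))                 # (m, 1) predictions

# In-place form the autograder may trip over:
#   theta -= (alpha / m) * (x.T @ (h - y))
# Written out in full instead:
theta = theta - (alpha / m) * (x.T @ (h - y))
```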

Hi @Mubsi,

I have rearranged my code according to your suggestions. Still the same errors come up…

Hi @Ahmet_Kasim_Erbay,

What you are currently doing is X * Y + Z; instead, do (X) * (Y + Z).
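The hint can be illustrated numerically (assuming the standard logistic-regression cost; the exact expression in the assignment may differ). With J = -(1/m) * (a + b), misplacing the brackets as -(1/m) * a + b scales only the first term, which matches the symptom above: a cost of the right rough magnitude but the wrong (even negative) value.

```python
import numpy as np

np.random.seed(1)
m = 10
y = (np.random.rand(m, 1) > 0.35).astype(float)
h = np.clip(np.random.rand(m, 1), 0.01, 0.99)   # fake predictions in (0, 1)

a = float(y.T @ np.log(h))           # sum of y * log(h) terms (negative)
b = float((1 - y).T @ np.log(1 - h)) # sum of (1-y) * log(1-h) terms (negative)

J_right = -(1 / m) * (a + b)         # (X) * (Y + Z): the whole sum is scaled
J_wrong = -(1 / m) * a + b           # X * Y + Z: only `a` gets scaled
```

J_right is always positive, since a and b are both negative, while the misbracketed J_wrong leaves b unscaled and can easily come out negative, as in the failing test.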

Thank you very much @Mubsi, my issue has been resolved.