Hi,
I am getting the error below,
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [53], in <cell line: 10>()
7 tmp_Y = (np.random.rand(10, 1) > 0.35).astype(float)
9 # Apply gradient descent
---> 10 tmp_J, tmp_theta = gradientDescent(tmp_X, tmp_Y, np.zeros((3, 1)), 1e-8, 700)
11 print(f"The cost after training is {tmp_J:.8f}.")
12 print(f"The resulting vector of weights is {[round(t, 8) for t in np.squeeze(tmp_theta)]}")
Input In [52], in gradientDescent(x, y, theta, alpha, num_iters)
32 print(J)
33 ### END CODE HERE ###
---> 34 J = float(J)
35 return J, theta
TypeError: only size-1 arrays can be converted to Python scalars
while my update step inside gradientDescent is calculated as follows:
theta -= (alpha / m) * np.dot(np.transpose(x), (h - y))
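For reference, this is how I expect the cost itself to collapse to a single number in that same function (just a rough sketch with toy data of the same shapes; the sigmoid helper and the toy x, y, theta are my own, not the course code):

import numpy as np

def sigmoid(z):
    # element-wise logistic function
    return 1.0 / (1.0 + np.exp(-z))

# toy data with the same shapes as in the failing test cell
np.random.seed(1)
x = np.hstack([np.ones((10, 1)), np.random.rand(10, 2)])  # (10, 3): bias column + 2 features
y = (np.random.rand(10, 1) > 0.35).astype(float)          # (10, 1) labels
theta = np.zeros((3, 1))                                   # (3, 1) weights

m = x.shape[0]
h = sigmoid(np.dot(x, theta))  # (10, 1) predictions

# (1, 10) @ (10, 1) -> (1, 1), so float(J) succeeds
J = -(1.0 / m) * (np.dot(y.T, np.log(h)) + np.dot((1 - y).T, np.log(1 - h)))
print(float(J))  # a single scalar, about 0.69314718 when theta is all zeros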
I also did not understand the part where we calculate the cost function num_iters times. Can anyone explain the concept and the error? I am more interested in the mathematical part…
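My current understanding of the math is that h = sigmoid(x·theta), the cost is J(theta) = -(1/m) * sum(y*log(h) + (1-y)*log(1-h)), and each iteration moves theta against the gradient (1/m) * np.dot(x.T, h - y). So the cost and the update are both recomputed once per pass through the loop, J is evaluated num_iters times, and only its last value is returned. A sketch of how I picture the loop (reusing numpy and the sigmoid helper from the snippet above, not the graded code):

def gradient_descent_sketch(x, y, theta, alpha, num_iters):
    # x: (m, n), y: (m, 1), theta: (n, 1)
    m = x.shape[0]
    for i in range(num_iters):
        h = sigmoid(np.dot(x, theta))  # (m, 1) predictions
        # scalar cost, recomputed on every iteration
        J = -(1.0 / m) * (np.dot(y.T, np.log(h)) + np.dot((1 - y).T, np.log(1 - h)))
        # move theta against the gradient (1/m) * x.T (h - y)
        theta = theta - (alpha / m) * np.dot(x.T, (h - y))
    return float(J), theta

Called as gradient_descent_sketch(x, y, np.zeros((3, 1)), 1e-8, 700) on the toy data above, it returns one scalar cost and the (3, 1) weight vector.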
My code calculates J as follows, but I don't understand how it ends up as this 10x10 matrix:
[[ 0.05120002  0.05120002 -0.91453146  0.05120002  0.05120002  0.05120002 -0.91453146 -0.91453146 -0.91453146  0.05120002]
 [ 0.06712235  0.06712235 -0.71556235  0.06712235  0.06712235  0.06712235 -0.71556235 -0.71556235 -0.71556235  0.06712235]
 [ 0.06358555  0.06358555 -0.7539217   0.06358555  0.06358555  0.06358555 -0.7539217  -0.7539217  -0.7539217   0.06358555]
 [ 0.06057016  0.06057016 -0.7889786   0.06057016  0.06057016  0.06057016 -0.7889786  -0.7889786  -0.7889786   0.06057016]
 [ 0.05286312  0.05286312 -0.8901631   0.05286312  0.05286312  0.05286312 -0.8901631  -0.8901631  -0.8901631   0.05286312]
 [ 0.05134407  0.05134407 -0.91238085  0.05134407  0.05134407  0.05134407 -0.91238085 -0.91238085 -0.91238085  0.05134407]
 [ 0.05654181  0.05654181 -0.83961344  0.05654181  0.05654181  0.05654181 -0.83961344 -0.83961344 -0.83961344  0.05654181]
 [ 0.06360477  0.06360477 -0.7537055   0.06360477  0.06360477  0.06360477 -0.7537055  -0.7537055  -0.7537055   0.06360477]
 [ 0.05214667  0.05214667 -0.9005385   0.05214667  0.05214667  0.05214667 -0.9005385  -0.9005385  -0.9005385   0.05214667]
 [ 0.06307207  0.06307207 -0.75973174  0.06307207  0.06307207  0.06307207 -0.75973174 -0.75973174 -0.75973174  0.06307207]]
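In case it helps to diagnose: my guess is that the 10x10 matrix comes from a shape mix-up, i.e. an outer product (or broadcasting) where an inner product was needed. With the toy h and y from the first sketch above, the difference looks like this (just my assumption about where my code goes wrong):

bad = np.dot(np.log(h), y.T)   # (10, 1) @ (1, 10) -> (10, 10) outer product
also_bad = y * np.log(h).T     # (10, 1) * (1, 10) broadcasts to (10, 10)
good = np.dot(y.T, np.log(h))  # (1, 10) @ (10, 1) -> (1, 1) inner product

print(bad.shape, good.shape)   # (10, 10) (1, 1)
# float(bad) raises "only size-1 arrays can be converted to Python scalars"
print(float(good))             # fine: the cost terms should end up size 1

So I suspect my J line produces a full matrix like the one printed above, and float(J) then fails at line 34 of the traceback.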