C1_W3_Lab06_Gradient_Descent_Soln
and
C1_W3_Lab07_Scikit_Learn_Soln
use the same X_train and y_train.
But in the scikit-learn version, if we print
print("weights:", lr_model.coef_)
we see weights that are very different from the manual implementation.
Lab06 output:
Iteration 0: Cost 0.684610468560574
Iteration 1000: Cost 0.1590977666870456
Iteration 2000: Cost 0.08460064176930081
Iteration 3000: Cost 0.05705327279402531
Iteration 4000: Cost 0.042907594216820076
Iteration 5000: Cost 0.034338477298845684
Iteration 6000: Cost 0.028603798022120097
Iteration 7000: Cost 0.024501569608793
Iteration 8000: Cost 0.02142370332569295
Iteration 9000: Cost 0.019030137124109114
updated parameters: w:[5.28 5.08], b:-14.222409982019837
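For context, the lab's manual version is (as I understand it) plain unregularized batch gradient descent for logistic regression. A minimal sketch of what I mean, with hyperparameters matching the run above but function names my own, not the lab's exact code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=10000):
    """Plain batch gradient descent for logistic regression (no regularization)."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(iters):
        f = sigmoid(X @ w + b)           # model predictions on all examples
        err = f - y                      # prediction error
        w -= alpha * (X.T @ err) / m     # gradient step on weights
        b -= alpha * err.sum() / m       # gradient step on bias
    return w, b

# The lab's training set (6 examples, 2 features)
X_train = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5],
                    [3.0, 0.5], [2.0, 2.0], [1.0, 2.5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

w, b = gradient_descent(X_train, y_train)
print("w:", w, "b:", b)
```

This reproduces weights on the order of the w:[5.28 5.08], b:-14.22 shown above.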
Lab07 output for the same data:
weights: [[0.90411349 0.73587543]]
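For reference, the scikit-learn side is just a default fit with no explicit scaling on my part; a minimal sketch of the call (the data here is the lab's training set, no other parameters passed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The lab's training set: 6 examples, 2 features
X_train = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5],
                    [3.0, 0.5], [2.0, 2.0], [1.0, 2.5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

lr_model = LogisticRegression()   # all defaults
lr_model.fit(X_train, y_train)

print("weights:", lr_model.coef_)  # shape (1, 2)
```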
Do we know why? Does scikit-learn do some feature scaling internally?
Also, how do we get the intercept b from the scikit-learn result?
Thanks!