C1_W2_Lab04_FeatEng_PolyReg_Soln Understanding Model Parameters

Hi everyone,

I’m working on the C1_W2_Lab04_FeatEng_PolyReg_Soln lab and trying to understand how the model parameters are learned using gradient descent. In the lab, we define the target variable as
y = x^2 and use the polynomial features x, x^2, and x^3 as inputs.
After running gradient descent, the model finds parameters:
w: [32.12, 40.67, 42.27], b: 123.4967
These values look very different from what I'd expect for a perfect y = x^2 fit. For example, for x = 10 the model predicts:
y = 32.12(x) + 40.67(x^2) + 42.27(x^3) + 123.4967
y = 46,781.6967
but the actual value should be 100!
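
Here is that calculation in code, just so it's clear what I'm doing: I'm plugging the raw features x, x^2, x^3 straight into w·x + b with the learned parameters (whether that is even the right way to use them is part of what I'm asking):

```python
import numpy as np

# Learned parameters from the lab run quoted above
w = np.array([32.12, 40.67, 42.27])
b = 123.4967

# Raw (un-normalized) polynomial features for x = 10: [x, x^2, x^3]
x = 10
features = np.array([x, x**2, x**3])

# Applying the weights directly to the raw features reproduces my number
y_hat = np.dot(w, features) + b
print(y_hat)  # ~46781.70, nowhere near the expected 100
```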

My main questions are:

  1. Why do the learned parameters look so different from the intuitive coefficients of y = x^2?
  2. How does normalization affect the interpretation of these parameters?
  3. Could these results be due to an implicit scaling factor introduced during normalization? (I've tried to make this concrete in the sketch below.)
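
To make questions 2 and 3 concrete, here is a small sketch of what I suspect is happening. This is not the lab's actual code: I'm assuming the training data is roughly x = 0..19 with y = x^2, that the polynomial features are z-score normalized before training, and I'm using least squares as a stand-in for gradient descent:

```python
import numpy as np

# Assumed setup (not the lab's exact code): y = x^2 on a small range of x
x = np.arange(0, 20, 1)
y = x**2
X = np.c_[x, x**2, x**3]            # raw polynomial features

# Z-score normalization, which I assume the lab applies before gradient descent
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_norm = (X - mu) / sigma

# Fit on the normalized features (least squares stands in for gradient descent)
theta = np.linalg.lstsq(np.c_[X_norm, np.ones(len(y))], y, rcond=None)[0]
w_norm, b_norm = theta[:3], theta[3]
# With zero-mean features the intercept is just mean(y) = 123.5,
# which is suspiciously close to the b = 123.4967 I got in the lab.

# Predicting for a new x only works if I normalize its features the same way
x_new = 10
f_norm = (np.array([x_new, x_new**2, x_new**3]) - mu) / sigma
print(np.dot(w_norm, f_norm) + b_norm)   # ~100.0, i.e. x_new^2

# Undoing the scaling recovers the coefficients on the raw features
w_raw = w_norm / sigma                       # ~[0, 1, 0]
b_raw = b_norm - np.dot(w_norm, mu / sigma)  # ~0
print(w_raw, b_raw)
```

If that's right, then the reported weights live in the normalized feature space, which would explain why they look nothing like [0, 1, 0], but I'd appreciate confirmation.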

I’d love to hear insights from the community. Thanks in advance!

Your two sets of weights appear to be exactly the same. So I don’t understand your question.