C2_W3_Lab_1_Regression_with_Perceptron
Why are we normalizing BOTH features by the same mean in section 1? Shouldn't each feature be normalized by its own mean and standard deviation for better results? adv_norm = (adv - np.mean(adv))/np.std(adv)

I downloaded the Jupyter notebook as a Python file and opened it in PyCharm.

What the workbook appears to be doing is taking the mean of the WHOLE DataFrame: one number, a float, and then subtracting that mean from BOTH columns. Since the TV numbers are an order of magnitude larger than the sales numbers, the overall mean is close to the mean of the TV numbers, and the overall standard deviation is close to the standard deviation of the TV numbers.

The proper technique is to take the mean of just the TV numbers, subtract that mean from the column, and divide by the standard deviation of JUST the TV numbers, and to do likewise for the sales numbers: get the mean of just that column, subtract it, and scale by the standard deviation of just that column.
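The per-column technique described above can be sketched like this (the data values here are made up for illustration, not the lab's actual advertising dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical advertising data in the spirit of the lab (not the real dataset).
adv = pd.DataFrame({"TV": [230.1, 44.5, 17.2, 151.5],
                    "Sales": [22.1, 10.4, 9.3, 18.5]})

# Normalize each column by ITS OWN mean and standard deviation.
tv_norm = (adv["TV"] - np.mean(adv["TV"])) / np.std(adv["TV"])
sales_norm = (adv["Sales"] - np.mean(adv["Sales"])) / np.std(adv["Sales"])
```

After this, each normalized column has mean 0 and standard deviation 1 on its own scale.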
Run a scikit-learn scaler on the data and you'll see that the result is different.

The trick is that np.mean(adv) and np.std(adv) are both vectors (one entry per column), not scalars.
So the normalization is applied in vector form, column by column: each column of the DataFrame is shifted by its own mean and scaled by its own standard deviation.
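This column-wise behavior can be checked directly. In the pandas version used by the lab, np.mean(adv) on a DataFrame dispatches to the column-wise reduction shown below (again with made-up values, not the lab's dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical advertising data (not the real lab dataset).
adv = pd.DataFrame({"TV": [230.1, 44.5, 17.2, 151.5],
                    "Sales": [22.1, 10.4, 9.3, 18.5]})

# Column-wise reductions: a Series with one entry per column, not a single float.
means = adv.mean()       # np.mean(adv) resolves to this in the lab's pandas version
stds = adv.std(ddof=0)   # population std, matching np.std's default ddof=0

# Broadcasting lines the Series up with the column labels, so each column
# is shifted by its own mean and scaled by its own standard deviation.
adv_norm = (adv - means) / stds
```

Because the broadcasting matches the Series index against the column labels, this is exactly the "normalize each column by its own statistics" behavior the questioner was asking for.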