please explain

What is gradient descent for multiple-variable linear regression?

First, multiple-variable linear regression looks like this: h(x) = theta_0 + theta_1*x_1 + theta_2*x_2 + … + theta_n*x_n

There is a difference between polynomial regression and multiple-variable regression: polynomial regression is a technique to reduce underfitting and reach the best fit (although polynomial regression can essentially be linear regression in one variable), while multivariable regression is regression with more than one independent variable.

Gradient descent for multiple-variable linear regression looks like this: repeat until convergence, for every j, theta_j := theta_j - alpha * (1/m) * sum over i of (h(x^(i)) - y^(i)) * x_j^(i)

When you do gradient descent for multiple variables, you are tuning the parameters (the thetas and b, the y-intercept) for every feature (variable), looking for the best parameters: the ones that, when multiplied with the features, give the best fit, suffering from neither overfitting nor underfitting.
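To make that concrete, here is a minimal NumPy sketch of batch gradient descent with one weight per feature plus a separate y-intercept b (illustrative code with made-up toy data and learning rate, not the course's implementation):

```python
# Minimal sketch of gradient descent for multiple-variable linear
# regression; the function name, data, and hyperparameters are illustrative.
import numpy as np

def gradient_descent(X, y, alpha=0.01, iterations=1000):
    """Fit h(x) = X @ theta + b by batch gradient descent."""
    m, n = X.shape
    theta = np.zeros(n)   # one weight (theta) per feature
    b = 0.0               # y-intercept (bias)
    for _ in range(iterations):
        predictions = X @ theta + b
        error = predictions - y
        # Each feature gets its own gradient, as noted above.
        theta -= alpha * (X.T @ error) / m
        b -= alpha * error.sum() / m
    return theta, b

# Toy data generated from y = 2*x1 + 3*x2 + 1
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
y = X @ np.array([2.0, 3.0]) + 1.0
theta, b = gradient_descent(X, y, alpha=0.05, iterations=5000)
```

On this toy data, theta converges toward [2, 3] and b toward 1, the values used to generate y.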

please feel free to ask any questions,

Thanks,

Abdelrahman

It’s the same as gradient descent for one variable (i.e. feature), except there are weights and gradients for each feature.

Why do we put x0 = 1? I cannot understand it. Please explain that.

Because when the theta matrix is multiplied with the X matrix, x_0 * theta_0 will be equal to theta_0, as x_0 is one.

Here theta_0 is the bias term, which is discussed in the lecture.

In addition to what ritik5 said: if we did not include x0 = 1, we would not be able to compute h(x), because it is a matrix multiplication. x is (1, 4) and theta is (1, 5), since 5 = the number of thetas + b (the bias). Multiplying those directly would raise an error, so we include an additional element x0 = 1 to satisfy the matrix multiplication rules.

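A small NumPy sketch of this shape argument (the numbers are made up for illustration):

```python
# Why x0 = 1 is prepended: it turns the bias into one more weight,
# so a single matrix product computes h(x). All values are illustrative.
import numpy as np

x = np.array([[2.0, 3.0, 5.0, 7.0]])            # shape (1, 4): four features
theta = np.array([[0.5, 1.0, -2.0, 0.3, 4.0]])  # shape (1, 5): theta_0 is the bias

# x @ theta.T would fail here: (1, 4) x (5, 1) shapes don't align.
x_with_bias = np.hstack([np.ones((1, 1)), x])   # prepend x0 = 1 -> shape (1, 5)
h = x_with_bias @ theta.T                       # (1, 5) x (5, 1) -> (1, 1)

# theta_0 * x_0 = theta_0 * 1, so the bias is simply added in:
assert np.isclose(h[0, 0], 0.5 + 1.0*2.0 + (-2.0)*3.0 + 0.3*5.0 + 4.0*7.0)
```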

please feel free to ask any questions,

Thanks,

Abdelrahman

thanks, bro

What is bias?

What is the difference between linear regression and logistic regression?

Hi,

Here I meant that the bias is the y-intercept, but that is not quite correct: in machine learning (AI), bias is the difference between the expected (or average) prediction value and the correct value we are trying to predict. …this picture will also help you

If the model suffers from high bias, it means the model is too simple and cannot fit the data (it underfits).
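A small illustrative sketch of high bias (underfitting), with made-up data: even the best possible straight-line fit to clearly curved data keeps a large error, because the model family itself is too simple.

```python
# High bias = underfitting: a straight line fit to quadratic data.
# The data and model choice here are illustrative, not from the course.
import numpy as np

x = np.linspace(-3, 3, 50)
y = x**2                      # the true target is curved

# Best possible straight-line fit (least squares), i.e. the limit of
# what this too-simple model family can do on this data.
A = np.vstack([x, np.ones_like(x)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
predictions = slope * x + intercept

# The gap between the (average) predictions and the correct values stays large:
mean_abs_error = np.abs(predictions - y).mean()
```

No amount of extra training fixes this; only a more flexible model (e.g. adding an x^2 feature) would reduce the bias.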

Hello @Muhammad_Asif2,

Here is a link to a thread where you asked a very similar question on gradient descent: Supervised learning - #6 by Muhammad_Asif2. The replies are also relevant here!

For a very good bias explanation, please check this video from Prof. Ng: Lecture 10.4 — Advice For Applying Machine Learning | Diagnosing Bias Vs Variance — [Andrew Ng] - YouTube

It’s important that these basics are well understood, since future course material will build on them. Therefore, do not hesitate to ask your questions after watching the course videos and doing the necessary course work.

Best

Christian

Hi,

There are many things to say about the difference between linear regression and logistic regression,

but the main concept is that logistic regression tries to predict discrete values (0, 1, 2, 3, …), meaning the target is one of multiple categories, and the task is to choose the best function, for example sigmoid or softmax, to predict the correct answer.

Linear regression, in contrast, tries to predict continuous values (0, 1.5, 100.6, 253.2, …), meaning the target is one value in an infinite range, and the task is to choose the best function, for example ReLU, to predict the correct answer.
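A minimal sketch of that output difference (the parameter values are made up): the same linear combination is either used directly (linear regression, any real number) or squashed by a sigmoid into a probability (logistic regression, a discrete class after thresholding):

```python
# Linear vs logistic regression output on one example; values are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([1.5, -0.8])
b = 0.2
x = np.array([2.0, 1.0])

linear_output = x @ theta + b                  # continuous: any real number
logistic_output = sigmoid(linear_output)       # probability in (0, 1)
predicted_class = int(logistic_output >= 0.5)  # discrete 0/1 label
```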

How do we find the decision boundary in logistic regression?

This thread addresses exactly this question:

Summary: after training your model, you can choose your threshold based on relevant metrics, e.g. the false positive rate and the false negative rate.
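As an illustrative sketch (with made-up validation probabilities and labels), moving the threshold trades false negatives for false positives:

```python
# Choosing the decision threshold from predicted probabilities.
# The probabilities and labels below are invented for illustration.
import numpy as np

probs  = np.array([0.05, 0.20, 0.35, 0.48, 0.55, 0.70, 0.85, 0.95])
labels = np.array([0,    0,    0,    1,    0,    1,    1,    1   ])

def rates(threshold):
    preds = (probs >= threshold).astype(int)
    fp = int(np.sum((preds == 1) & (labels == 0)))  # false positives
    fn = int(np.sum((preds == 0) & (labels == 1)))  # false negatives
    return fp, fn

# With the default 0.5 boundary we miss the positive example at 0.48:
fp_default, fn_default = rates(0.5)
# Lowering the threshold catches it, without adding a false positive here:
fp_low, fn_low = rates(0.4)
```

Which threshold is "best" depends on the application: if false negatives are costly (e.g. missed diagnoses), a lower threshold may be preferable.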

Hope that helps!

Best

Christian