How to do non-linear regression using sklearn?

What exactly do you mean by “non-linear regression”?

Scikit-learn offers great documentation with really good minimal examples. Check these two out:

- Support Vector Regression (SVR) using linear and non-linear kernels - scikit-learn 1.2.2 documentation
- Prediction Intervals for Gradient Boosting Regression - scikit-learn 1.2.2 documentation

There are many other great supervised learning models available in scikit-learn, which you can explore here: 1. Supervised learning - scikit-learn 1.2.2 documentation
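
To give a flavour of the first link, here is a condensed sketch of SVR with a non-linear RBF kernel (the data and hyperparameters are purely illustrative, not tuned):

```python
import numpy as np
from sklearn.svm import SVR

# Noisy sine wave: a clearly non-linear relationship
rng = np.random.default_rng(42)
X = np.sort(5 * rng.random((80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

# The RBF kernel lets SVR capture the non-linear shape directly
svr = SVR(kernel="rbf", C=100, gamma=0.1, epsilon=0.1)
svr.fit(X, y)
print(svr.predict([[2.5]]))  # should land close to sin(2.5)
```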

In addition, if you have a very specific domain function in mind that you want your regression model to fit: here you can also find a repo where I tried to solve some non-linear differential equations utilizing probabilistic models and estimated the model parameters with SciPy and TensorFlow (and also Julia for neural ODEs):
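
If that is your use case, SciPy's `curve_fit` is often the most direct tool. A minimal sketch, assuming a made-up exponential-decay function as the domain model (the function and parameter values are purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical domain function: exponential decay with an offset
def decay(t, a, k, c):
    return a * np.exp(-k * t) + c

# Synthetic noisy measurements generated from known parameters
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 50)
y = decay(t, 2.5, 1.3, 0.5) + 0.05 * rng.normal(size=50)

# Non-linear least squares estimates a, k and c from the data
params, covariance = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
print(params)  # should land close to (2.5, 1.3, 0.5)
```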

Hope that helps!

Best regards

Christian

In addition, this thread might be interesting for you: Can decision tree algo used for regression? - #2 by Christian_Simonis

Please let us know if your question has been answered or if anything is still open from your perspective, @ASHISH_KUMAR_MISHRA.

Best regards

Christian

As far as I know, the toolkit from sklearn discussed in the course fits a linear model to the data. What if I want to fit a non-linear model to the data using sklearn?

“Linear model” only means that it uses the linear combination of the weights and features (f_wb = w*x + b). It doesn’t refer to the shape that f_wb describes.

If the features themselves have non-linear characteristics, you will get a non-linear f_wb curve. For this to work you either need more than one feature, or you need to create additional non-linear features from the original ones.

Either way, the fitting process is the same. “Non-linear regression” really isn’t a separate topic.

You’ll see this later in the course.

That’s a very good point. Here is a concrete example of how to encode the non-linearity in the features themselves:
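
A minimal sketch of that idea (my own toy example): `PolynomialFeatures` creates the non-linear features, and a plain `LinearRegression` is fitted on top of them, so the model stays linear in the weights while the resulting curve is non-linear in x:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy data with a quadratic relationship, so a plain line would underfit
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100).reshape(-1, 1)
y = 0.5 * x.ravel() ** 2 - x.ravel() + rng.normal(scale=0.3, size=100)

# Engineer the features (x, x^2), then fit an ordinary linear model:
# f_wb = w1*x + w2*x^2 + b is linear in the weights, non-linear in x
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(x, y)
print(model.predict([[1.5]]))
```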

One more point regarding this statement:

I am a huge fan of feature engineering, but while it is often necessary, it is unfortunately not always enough to succeed: in reality, encoding non-linearity in the features alone is not always sufficient to capture the full complexity of the relationship between the variables.

Therefore, in practice, non-linear models still play an important role when linear models with manual feature engineering reach their limits. You can also see this when taking a look at the yearly publications and patents from top-tier universities and enterprises.

One example is the Lotka-Volterra system from my previous post, where I believe you cannot build a sufficiently accurate solution with a linear model and feature engineering alone!
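
For reference, a minimal sketch of that system with SciPy's `solve_ivp` (parameter values are purely illustrative): the two populations are coupled through the product term x*y, which is exactly the kind of interaction that hand-crafted features struggle to capture:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey system (illustrative parameters)
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4

def lotka_volterra(t, z):
    x, y = z  # prey and predator populations
    return [alpha * x - beta * x * y, delta * x * y - gamma * y]

# Integrate the coupled non-linear ODEs over time
sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], dense_output=True)
t = np.linspace(0, 50, 200)
prey, predator = sol.sol(t)
```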

Best regards

Christian

Cool! Did you check out the resources provided? Has your question been answered, @ASHISH_KUMAR_MISHRA?

Best regards

Christian