Hi! I enrolled in the Deep Learning Specialization, and I’m on week two. I’m finding it informative and will finish the course to get as much exposure as I can, but I would like to find another course that is geared more towards regression analysis than classification, i.e. one where the loss function is the mean squared error (or another appropriate function) rather than cross-entropy. As a thought experiment and an introduction to the fundamentals this course is somewhat useful, but since it largely overlooks my specialization (I am a physicist and director of research for a company that works with engineering functions), it is not as didactic as I would like. That being said, are there other treatments of deep learning for regression analysis out there that any of you know of? On Coursera or elsewhere… At the moment ChatGPT is proving to be my best teacher…
The Machine Learning Specialization has more examples of linear regression. Typically students take it before moving on to DLS.
And you can likely find a lot of regression datasets for practice in the datasets area of Kaggle.
Thanks for your input @TMosh. It is much appreciated, but I think I have not correctly represented my need: I have spent the last six months developing a machine learning algorithm for my regression analysis, and it has become apparent that I need at least two hidden layers; the long-term scope of this project will potentially require many more layers than that. Also, my model is nonlinear, and although I am currently employing R-squared as a placeholder metric, I will need more advanced metrics for evaluating goodness of fit for nonlinear models. I am specifically looking for a deep learning treatment of nonlinear regression analysis with multiple parameters, in which multiple mathematical models are used in sequence for fitting past data and generating forecasts for future data. I’m wondering if and where I might find such a treatment? Or whether I need to invent it myself… My current program is in C#, but I can program in Python. C# is preferable for several reasons, so I am also looking for a treatment of NNs using C#. Kind regards, Alix
Thanks for the clarification.
A NN with a “regression” output is just a regular output layer with no activation function. You can have multiple outputs if necessary.
“linear regression” doesn’t refer to the shape of the model - it refers to using the linear combination of the features and weights.
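To make that concrete, here is a minimal NumPy sketch of a network whose output layer has no activation function, scored with MSE. The layer sizes, random weights, and toy target here are made-up illustrations, not anything from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 input features, 4 hidden units, 1 regression output.
X = rng.normal(size=(10, 3))                # 10 samples
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0    # a toy target to regress on

W1 = rng.normal(size=(3, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)   # hidden layer with ReLU activation
    return (h @ W2 + b2).ravel()       # output layer: linear, no activation

def mse(y_pred, y_true):
    # Mean squared error, the usual regression loss
    return np.mean((y_pred - y_true) ** 2)

loss = mse(forward(X), y)
print(loss)
```

Training would then minimize this MSE over the weights; only the loss and the output activation differ from the classification setup in the course.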
So, I have enrolled in the Machine Learning Specialization and left Deep Learning for later, as recommended, but as I suspected it is really geared towards linear regression and optimization of features, whereas I am optimizing the parameters of a nonlinear function. According to ChatGPT, the function I am using, scipy.optimize.differential_evolution, is a better fit for my needs than gradient descent, as the two situations are very different. Furthermore, I will have multiple layers of selection of the best curve, and the algorithm will need to decide on the best curve based on a variety of criteria, which is why I originally enrolled in the Deep Learning Specialization. I will continue with the ML Specialization to make sure all my bases are covered, but again, I am wondering if there are other courses available that are less geared towards the linear “features and weights” approach, which is decidedly not directly relevant to my situation. I do need to apply weights to the criteria of the NN so that decisions can be made in a stochastic manner, but not in the way the material is presented in these courses. I am not sure if I am being clear here. Please ask questions if more clarification is needed. But is there a different course more suitable for my needs?
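For anyone reading along, here is a minimal sketch of fitting the parameters of a nonlinear model with scipy.optimize.differential_evolution. The exponential model, the true parameter values, and the bounds are made-up examples for illustration, not the actual function from my project:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic data from a hypothetical nonlinear model y = a * exp(b * x)
x = np.linspace(0.0, 2.0, 50)
a_true, b_true = 2.5, 1.3
y = a_true * np.exp(b_true * x)

def sse(params):
    # Sum of squared errors between the model and the data;
    # no gradient of this objective is ever required.
    a, b = params
    return np.sum((a * np.exp(b * x) - y) ** 2)

# Population-based global search over bounded parameter ranges
result = differential_evolution(sse, bounds=[(0.0, 10.0), (0.0, 5.0)], seed=1)
print(result.x, result.fun)
```

The optimizer only evaluates the objective, which is why it suits functions whose derivatives are hard or impossible to compute.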
Do not be confused as to what “linear” means. And do not rely on chat-bots to do any more than generate pleasant-sounding sentences.
“Linear” doesn’t describe the shape of the model. It refers to the cost being minimized over a model that is a linear combination of the weights and features.
If the features themselves have a nonlinear relationship to the target, then so will the model.
Yes, in any case the differential_evolution algorithm is giving me excellent results, and according to my research it is a better optimization algorithm for my parameters than gradient descent, especially since computing derivatives of my function is very difficult. differential_evolution uses a population-based approach rather than relying on gradients. Is there any treatment of optimization algorithms other than gradient descent? I know there are several out there, and I would like a treatment of the different approaches. That said, that part of my algorithm is done, and I am now looking for a treatment of weighting different nodes of a neural network in order to select the best curve (which is by no means simply the best fit of the curve, as it is selected based on multiple criteria defined in the nodes). Is there any treatment of this?