Why does regression have infinitely many possible outcomes?
Do you mean Linear Regression?
Hi @Muhammad_Asif2, when we evaluate a regression problem, the prediction can be any numerical value. Say you want to predict the price of a house: the possible price lies anywhere between 0 and infinity. Because the outcome is continuous, there are infinitely many possible values. In other words,
20.8531234 is a valid prediction and different from
20.8531235. This is the opposite of a classification problem, where the outcomes are finite: for instance, if you predict whether the price of the house will be higher next year, that can be either true or false, and there is nothing between those two outcomes.
Regression: Infinite number of outcomes (continuous)
- Item price: 1.23 - 1.24 - 5.32 (decimals are possible)
- Revenue: 1.1 Million - 1.2 Million - 12.23 Million
Classification: Finite number of outcomes (discrete)
- Higher price next year: True - False
- Pet: Cat - Dog
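The contrast above can be sketched in a few lines of Python. The model formulas and numbers here are made up purely for illustration:

```python
# Toy illustration: a regression model returns a continuous number,
# a classification model returns one of a finite set of outcomes.

def predict_price(size_sqm: float) -> float:
    # A made-up linear model: price (in thousands) = 2.5 * size + 30
    return 2.5 * size_sqm + 30.0

def predict_higher_next_year(size_sqm: float) -> bool:
    # A made-up rule producing one of exactly two outcomes
    return predict_price(size_sqm) > 150.0

price = predict_price(48.341)             # any real number is a valid output
label = predict_higher_next_year(48.341)  # only True or False is possible

print(price)   # → 150.8525 (continuous: nearby values like 150.8526 are also valid)
print(label)   # → True     (discrete: nothing exists between True and False)
```

The return types tell the story: `float` has (practically) infinitely many values, `bool` has exactly two.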
Let me know if this answers your question.
Yes I mean regression
Thanks, this answer is very helpful for me.
Can you explain the cost function and gradient descent?
I suggest you take a look at the course content again since the underlying concepts are explained really well. And they are important to understand since several future concepts will build upon these basics.
A cost function provides a metric you can use to drive your optimization. When fitting a model you actually want to minimise the cost (the model error). See also:
Gradient descent is a powerful method for the above-mentioned optimization when fitting your model. It is explained really well by Andrew Ng in this video:
Also this thread might be interesting for you:
- How different initialization of centroids of K-means results in drastic different clusters ? They all share common cost function - #5 by Christian_Simonis
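To connect the two ideas concretely, here is a minimal sketch: a single-weight model `y = w * x`, a mean-squared-error cost, and hand-rolled gradient descent on toy data. The data, learning rate, and notation are my own assumptions, not the course's exact formulation:

```python
# Fit y = w * x to toy data by gradient descent on the
# mean-squared-error cost J(w) = (1/2m) * sum((w*x_i - y_i)^2).

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated by the true relation y = 2x
m = len(xs)

def cost(w: float) -> float:
    # The quantity we want to make as small as possible
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

def gradient(w: float) -> float:
    # dJ/dw = (1/m) * sum((w*x_i - y_i) * x_i)
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / m

w = 0.0        # initial guess
alpha = 0.05   # learning rate
for _ in range(200):
    w -= alpha * gradient(w)   # step against the gradient, downhill on J

print(round(w, 4))        # → 2.0 (recovers the true slope)
print(cost(w) < 1e-9)     # → True (cost driven essentially to zero)
```

Each iteration moves `w` a small step in the direction that decreases the cost; after enough steps the cost bottoms out and `w` stops changing, which is exactly what "fitting the model" means here.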
Can you explain classification problems?
Can you explain classification problems in supervised learning?
If you have categorical data which you can use as labels, then you can formulate a classification problem and use supervised learning.
Supervised learning means you provide examples / observations via labelled data to the algorithm, so that the model can learn the classification "border" (of course this can be high-dimensional and non-linear) on its own: https://static.javatpoint.com/tutorial/machine-learning/images/classification-algorithm-in-machine-learning.png
A possible application would be detecting whether a picture shows a cat or a dog (…), with the algorithm classifying the image accordingly.
Labelled data in this example means you have pictures together with their labels (= where you know for sure that a dog or a cat is in the picture), and you can use them for training.
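To make "learning from labelled data" concrete, here is a toy sketch. The features and values are invented for illustration, and a 1-nearest-neighbour rule stands in for the image models one would really use for cats vs dogs:

```python
# Labelled data: each training example is (features, label).
# Features here are made up (e.g. ear_length, snout_length).
training_data = [
    ((3.0, 2.0), "cat"),
    ((2.5, 1.8), "cat"),
    ((6.0, 7.0), "dog"),
    ((5.5, 6.5), "dog"),
]

def classify(features):
    # 1-nearest-neighbour: predict the label of the closest training example
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: dist2(ex[0], features))
    return label

print(classify((3.1, 2.2)))  # → cat
print(classify((5.8, 6.8)))  # → dog
```

The key point is the shape of `training_data`: the labels ("cat" / "dog") are known in advance, and that is what makes the learning *supervised*.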
Is there any specific part or a course-related question that you are interested in?