K-nearest Neighbors and Decision Tree algorithms are part of DL, but they are not linear, right?
Are they Neural Networks?
Because so far I understood that we have linear regression or a deep Neural Network, where the parameters are things like the number of hidden layers or the number of neurons.
But in the K-nearest Neighbors algorithm our parameters are the initial dataset and the number of neighbours, and in the Decision Tree algorithm our parameter is the number of leaves in the tree. So how are these 2 algorithms related to DL?
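To make it concrete, this is roughly what I mean by "parameters" in those two algorithms. This is just a minimal sketch using scikit-learn; the names n_neighbors and max_leaf_nodes are scikit-learn's hyperparameter names, not something from the course:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Tiny toy dataset: 2 features, binary labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# K-nearest Neighbors: the "parameter" I pick is the number of neighbours
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# Decision Tree: one "parameter" I pick is the maximum number of leaves
tree = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0)
tree.fit(X, y)

print(knn.predict([[2.5, 2.5]]))   # class chosen from the nearest labelled points
print(tree.predict([[0.5, 0.5]]))  # class chosen from the learned splitting rules
```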
I do not know anything about Decision Trees, but a quick google tells me it is a Supervised Learning algorithm. If K-nearest Neighbors is the same as “K-means”, then that was covered in Prof Ng’s original Stanford Machine Learning course. Those other algorithms are Machine Learning algorithms, but not Deep Learning algorithms. “Deep Learning” means Neural Networks. There is a “containment” relationship that goes like this:
Artificial Intelligence is a superset which contains (among other things):
  Machine Learning, which contains:
    Supervised Learning algorithms (with labelled data), including:
      Neural Networks, including Deep Learning
      Logistic Regression
      Linear Regression
      Decision Trees
      Recommender Systems
      Many more ...
    Unsupervised Learning algorithms (don't require labelled data), including:
      K-means
      ...
The point is that Machine Learning covers many different types of algorithms besides the Deep Learning algorithms that we are learning about in DLS.
“K-nearest Neighbors” (K-NN) and “K-means” are 2 completely different algorithm categories. K-NN is a supervised machine learning algorithm, while K-means is an unsupervised one. K-NN is a classification or regression algorithm, while K-means is a clustering algorithm. K-NN is a lazy learner, while K-means is an eager learner.
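For anyone who finds code clearer, here is a minimal sketch of that difference using scikit-learn (my own illustration, nothing official from the course):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Two obvious groups of points
X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])  # labels are required for K-NN (supervised)

# K-NN: "lazy" -- it just stores the labelled data and classifies new points
# by looking at their nearest labelled neighbours at prediction time
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[2.0, 2.0], [8.5, 8.5]]))   # -> [0 1]

# K-means: unsupervised -- it learns cluster centroids from X alone,
# with no labels, and assigns new points to the nearest centroid
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                 # cluster index assigned to each training point
print(kmeans.predict([[2.0, 2.0]]))   # index of the nearest learned centroid
```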
You may have misunderstood my comment. The sole goal of my comment was to compare the two, since that may be relevant to anyone looking into this thread. The quote I selected was rather random.