For those of you who may be wondering how the Machine Learning Specialization differs from the original Machine Learning Course, I’d like to share some changes in how the math is introduced and explain why we changed it.
When I was discussing the design of the new courses with Andrew Ng and the team, Andrew was seeking to make ML more accessible to a wider audience, and asked how far we could take learners toward a practical understanding of ML without requiring certain math concepts. On the other hand, members of my team felt strongly that learners needed to understand the math. I also recounted how Andrew’s courses are known for “really teaching the math.”
What we ended up with is what I’d call “just in time math.” Whereas the original course had a “linear algebra review” section that introduced all the required math “up front,” I looked for places where the learner could gain the ML intuition and start practicing the code without requiring certain math concepts. I also looked for places where we should introduce the relevant math, because it was needed in order to move forward in the ML learning journey.
The Machine Learning Specialization is designed for a beginner level audience to gain the skills, intuition and confidence so that learners can build ML applications and keep diving deeper into the field. For those who want to see the math, don’t worry … it’s still there, but now it’s placed “just in time”, right before it’s needed.
Hi @Jaggerna! We tried to make sure any math you need to know is explained before you use it. There isn’t much you need to know about probability except that it’s a number that ranges from 0 to 1.
Hello @eddy,
Is “Mathematics for Machine Learning” by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong a good book for learning the math behind ML?
Hi @dparekh123 , if you’d like to learn about the calculus rules used for linear regression gradient descent, you can watch my lecture videos on the derivatives for “logistic regression” (the week 3 topic), which explain some of the general derivative rules that are applied in both linear regression and logistic regression. I’ve posted a discourse message linking to the code notebook and videos here:
Hi @Sanak , I am not familiar with that book, but I think any book would be fine. I personally feel like it’s okay for you to spend more time going through the machine learning course content rather than feeling like you have to first learn all the math before getting started on the machine learning concepts.
For the calculus rules that are relevant to the linear regression (weeks 1 and 2) and logistic regression (week 3) models, you can take a look at my lecture videos on logistic regression. I posted a link in discourse here:
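To make the derivative rules concrete, here is a minimal sketch (my own illustration, not part of the course notebooks) of gradient descent for single-variable linear regression. It assumes the course’s model notation f(x) = w*x + b and the mean squared error cost; the data values are made up for the example.

```python
# Minimal gradient descent for single-variable linear regression.
# Hypothetical toy data: y is exactly 2*x, so we expect w -> 2 and b -> 0.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]

w, b = 0.0, 0.0   # initial parameters
alpha = 0.01      # learning rate
m = len(x)

for _ in range(10000):
    # Partial derivatives of the mean squared error cost J(w, b):
    #   dJ/dw = (1/m) * sum((w*x_i + b - y_i) * x_i)
    #   dJ/db = (1/m) * sum( w*x_i + b - y_i )
    dw = sum((w * xi + b - yi) * xi for xi, yi in zip(x, y)) / m
    db = sum((w * xi + b - yi) for xi, yi in zip(x, y)) / m
    # Simultaneous update of both parameters, as in the lectures.
    w -= alpha * dw
    b -= alpha * db

print(w, b)
```

The point of the sketch is that you can run the update rule and watch it converge without first mastering where the derivative formulas come from; the lecture videos fill in that derivation when you’re ready.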
@eddy Hello, I am learning the “gradient descent” part, and I didn’t understand the math. Is it really OK to learn machine learning while unfamiliar with the math? I am worried about this point. Thanks
Hi @kaian0414 , thanks for the question. The short answer, in my personal experience, is yes. I think it’s more helpful for you to see more of the course and get a bigger picture of machine learning instead of feeling like you have to first study linear algebra and calculus before learning machine learning.
As an analogy, if I wanted to learn how to draw a frog, maybe the first time I drew it, I didn’t draw the eyes quite right. One thing I might do is stop there and keep trying to draw the eyes over and over before continuing with the rest of the frog.
Another approach I could take is to just keep drawing the whole frog. When I’m done, I can still go back and draw another whole frog again. Maybe drawing the frog’s face will actually help me learn how to draw the eyes better.
It’s the same with machine learning. I could stop taking the machine learning course and start another course on linear algebra, or buy a book on derivatives. Another approach is to keep moving forward with the machine learning course. The math details are always something you can revisit and re-learn as you learn more machine learning concepts.
Later, when you get to course 2, I think it will be neat for you to see how the same concepts from course 1 apply to neural networks.
I think I’ll try to write a separate post to give some non-math analogies about gradient descent and derivatives. I’ll update this thread with the link to that post after I write it.
What a great example!! This is a common problem many students face. Your message inspires me to keep going and apply this kind of approach to other subjects, and to my life as well.
This is a really great thing. I was afraid of reading all that math stuff and found it boring, but now I am very excited about learning this specialization.