For those of us who started with the Deep Learning specialization (and found it clear and understandable overall), is there any reason why we should backtrack to the Machine Learning specialization after completing the DLS?
Mainly, I'm wondering whether the ML specialization spends more time, or goes into more depth, than the DLS on comparing the properties of commonly used activation functions and (even more so) commonly used loss functions.
(I felt the DLS does a good job of explaining, at a surface level, why a given activation function was used in the output layer of a given programming exercise or NN example, but it didn't really explore the nuances beyond that. For example: are there other loss functions that might make sense in a given situation? What are the characteristics of each option?)
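To make concrete the kind of comparison I have in mind, here's a small sketch (my own toy NumPy example, not from either course) contrasting mean squared error, mean absolute error, and Huber loss on regression data with one outlier; the point is that the choice of loss changes how much that outlier dominates:

```python
import numpy as np

# Toy regression targets and predictions; the last target is an outlier.
y_true = np.array([1.0, 2.0, 3.0, 100.0])
y_pred = np.array([1.1, 1.9, 3.2, 4.0])

err = y_true - y_pred

# MSE squares the errors, so the single outlier dominates the average.
mse = np.mean(err ** 2)

# MAE grows only linearly with the error, so it is more robust to outliers.
mae = np.mean(np.abs(err))

# Huber loss blends the two: quadratic near zero, linear in the tails.
delta = 1.0
abs_err = np.abs(err)
huber = np.mean(np.where(abs_err <= delta,
                         0.5 * abs_err ** 2,
                         delta * (abs_err - 0.5 * delta)))

print(f"MSE:   {mse:.2f}")
print(f"MAE:   {mae:.2f}")
print(f"Huber: {huber:.2f}")
```

The MSE here is orders of magnitude larger than the MAE or Huber values precisely because of the one outlier, which is exactly the sort of trade-off I was hoping the course material would walk through systematically.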
Apologies if this question seems slightly repetitive; I've definitely seen similar questions from learners trying to decide which specialization to complete first, but I couldn't find a thread comparing the actual content of the two specializations (which seems like the most relevant thing to consider when deciding whether or not to backtrack).