I have completed 3 weeks of the Neural Networks and Deep Learning course and am now eager to apply my learning in real projects. I’m curious and passionate about working with real-world data. Could anyone guide me on how to apply my knowledge to a small or medium project that incorporates ideas from the lectures we’ve covered so far?
You might try going to Kaggle and downloading some of their datasets. They also provide a Jupyter programming environment.
Now that I have a solid understanding of how a NN works under the hood,
is it time to start practicing on a project?
I have not used TensorFlow or PyTorch yet, which I would need in order to work with real data.
That said, I did try the MNIST digit dataset from tensorflow.keras, using 1 hidden layer of 128 units with ReLU activation and sparse_categorical_crossentropy for the loss,
and its accuracy is 97.8%.
Here is Kaggle Link
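For anyone who wants to reproduce the setup described above, here is a minimal tf.keras sketch of that architecture (this is my own reconstruction from the description, not the original notebook; the optimizer and epoch count are assumptions):

```python
import tensorflow as tf

# Load MNIST (60k train / 10k test, 28x28 grayscale digits) and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# One hidden layer of 128 ReLU units, softmax output over the 10 digit classes
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Sparse categorical cross-entropy works directly with integer labels,
# so no one-hot encoding of y is needed
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One epoch for a quick check; training for a few more should approach
# the ~97.8% test accuracy reported above
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```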
The MNIST digit recognition is a very nice project to apply the knowledge you’ve gained so far. 97.8% accuracy is a good result with one hidden layer. And you’ve also learned about softmax, which won’t be covered in DLS until Course 2. Nice work!
@paulinpaloalto @TMosh
Thanks
I would like to ask: what other, slightly more challenging project do you have in mind?
I have completed the first course of the Deep Learning Specialization.
Perhaps complete the 2nd and 3rd courses in DLS before you dive into further projects.
That is a good idea. I will do so.
@TMosh
@paulinpaloalto
Sir, this question just hit me: what is the purpose of such a deep dive into neural networks (in Courses 1 and 2, where we build NNs from the math equations) when we have libraries like TensorFlow and PyTorch?
I don’t mean this as a criticism. I’m genuinely curious and would like some clarity on how this deep dive into the math and fundamentals helps build better intuition.
I have not really heard Professor Ng discuss the methodology behind the way he presents the information in the DLS courses, but my interpretation is that the point is exactly to build better intuition.

You’re right that everyone these days uses an ML platform like TF or PyTorch or one of the other various choices to implement actual solutions, but there are still lots of decisions you as the system designer need to make: how do you decide which type of algorithm is appropriate for the particular problem you need to solve? What do you do if your first try at a solution does not give you sufficiently good results? What are the possible causes of that situation and possible paths to solving it? If you just treat all the algorithms as “black boxes” and don’t really know what is going on “underneath the covers”, that limits your ability to solve problems, compared with the deeper intuitions you gain from learning how each type of algorithm works, as we do from Professor Ng’s presentations.
Another motivation might be the hope that some of the audience includes people who actually want to advance the field by creating new techniques or algorithms. Knowing how the existing technology works, and something of the history and paths that were taken to get where we are, is useful background for people with that mindset.
Thank you for the thoughtful response — that actually clears up a lot!
I now see that while TensorFlow/PyTorch handle the implementation, it’s our understanding of the core principles (like cost functions, gradients, activations, and regularization) that lets us properly design, tweak, and debug models.
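As a concrete illustration of that point, here is what a framework is automating when it fits a model: the forward pass, the chain-rule gradient, and the parameter update, written out by hand in NumPy for logistic regression (the toy dataset and all variable names here are my own, just to make the idea tangible):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: 100 examples, 2 features;
# the label is 1 when the features sum to a positive number
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.5          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b):
    """Binary cross-entropy cost J(w, b), as derived in Course 1."""
    a = sigmoid(X @ w + b)
    return -np.mean(y * np.log(a + 1e-12) + (1 - y) * np.log(1 - a + 1e-12))

before = loss(w, b)
for _ in range(100):
    a = sigmoid(X @ w + b)          # forward pass: predicted probabilities
    dz = a - y                      # dJ/dz from the chain rule
    w -= lr * (X.T @ dz) / len(y)   # gradient-descent step on the weights
    b -= lr * dz.mean()             # gradient-descent step on the bias
after = loss(w, b)
# The cost drops as the parameters move downhill on J
```

Everything in this loop happens inside a single `model.fit()` call in a framework, which is exactly why knowing the cost function and its gradients helps when a model misbehaves: you know which knob each symptom points at.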
Without that foundation, it’d be like trying to tune an engine without knowing how it works.
Also, your point about contributing to future advancements really hit home: if we want to build beyond what is already available, then we have to understand what’s under the hood.
Appreciate your perspective!