Hey,
My doubt is: so far we have coded every step of the NN implementation ourselves, from backpropagation to gradient descent. In the real world, should we follow this approach, or are there frameworks that do it automatically? When should we implement our NN completely by hand?
It is an important question. In the "real world" pretty much everyone uses frameworks like TensorFlow or PyTorch to implement neural networks. The advantage is that the code in the frameworks is heavily optimized and well tested, so it gives you a state-of-the-art solution while saving you the work of building everything yourself from scratch. The state of the art these days is pretty deep, so it's not a trivial exercise to build everything yourself. Of course if you are a researcher who is trying to advance the state of the art, then you're on your own for any of the new things you are developing.
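To make that concrete, here is a minimal sketch (my own, not from the course) of what "the framework does backprop for you" looks like in TensorFlow 2: you only write the forward pass and the loss, and GradientTape computes all the gradients automatically.

```python
import tensorflow as tf

# Toy data: learn w and b so that y ≈ 2*x + 1
x = tf.constant([[1.0], [2.0], [3.0], [4.0]])
y = tf.constant([[3.0], [5.0], [7.0], [9.0]])

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        y_hat = w * x + b                       # forward pass only
        loss = tf.reduce_mean((y_hat - y) ** 2)
    grads = tape.gradient(loss, [w, b])         # backprop handled by the framework
    optimizer.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())  # should approach 2.0 and 1.0
```

Compare that to the assignments, where we had to derive and code the gradient of the loss with respect to every parameter by hand.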
But Prof Ng shows us how things actually work for an important reason: as he has discussed at various points throughout the courses, it's not always cut and dried to come up with a solution to a new problem. It almost always requires tuning and tweaking, perhaps changing big things like the architecture of your network or things like the choice of initialization or optimization algorithms. If you have never seen what's happening at the lower levels, you miss out on the intuitions about which direction to go when you have a given type of problem. If you only ever learn things from the point of view of TensorFlow, then it's all essentially a "black box" to you. It really helps to have some intuition about what is going on "under the covers", e.g. why a certain kind of operation is expensive.
We'll first be introduced to TensorFlow in Week 3 of Course 2, so stay tuned for that. Then in Course 4, Prof Ng will follow the same pattern: he'll show us how to build a Convolutional Net from the ground up first and then will switch to using TensorFlow to build more complex solutions.
The other general thing to say is that there are lots of frameworks available, so one of the choices you need to make is which one to use. If you are planning a career in ML/DL, it never hurts to know more than one framework. It's like programming languages: the more you know, the better. When you work in a company environment, they will probably have made a company-wide choice of framework, and it helps if you know several when you are applying for jobs. Here in the Deeplearning.AI courses, TensorFlow is used pretty much everywhere, except in the GANs specialization, which uses PyTorch. One fun way to get exposed to PyTorch is to take the GANs specialization after you finish DLS. The material is really interesting in its own right and you learn PyTorch as a cool side effect.
@paulinpaloalto mentions some good reasons to prefer frameworks over hand-written NNs. Another that matters much more in the "real world" than in these classes is built-in support for distributed computing and the associated scalability. Chapeau to the folks who write and test that code, it's not something I would ever want to take on myself.
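For a sense of what that built-in support looks like, here is a minimal sketch (my own, assuming TensorFlow's tf.distribute API): wrapping model construction in a MirroredStrategy scope is enough to replicate training across all local GPUs, with the gradient synchronization handled for you.

```python
import tensorflow as tf

# MirroredStrategy replicates the model onto every visible GPU on this machine
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Anything created inside the scope is mirrored across devices
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(...) then trains with batches split across devices and
# gradients aggregated automatically.
```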
First of all, I would like to give a big thanks to you. I haven't seen many instructors out there typing such long answers for the students. Really appreciate the effort.
Finally, could you guide me on what course I can take next after this Specialisation?
Yes, I will keep that in mind.
There's no single right answer to a general question like that. It fundamentally depends on what your goals are. I can list some of the possibilities that I'm aware of from Deeplearning.AI, but there are lots of other courses on Coursera and other platforms.
If you are interested in learning how to apply DL to problems in particular application domains, e.g. medical image analysis, you could take a look at the AI for Medicine specialization.
There are several specializations that delve more deeply into how to use TensorFlow to solve various types of problems. Those are another possibility. If your goal is to apply for jobs, knowing TF well is a good thing. But also be aware that there are other platforms out there and not all companies use TF. PyTorch is another framework that is widely used. Just as with computer languages, knowing more is always better when applying for jobs.
If you are interested in the Data Science area, which is related to but really different from ML/DL, you could take a look at the Practical Data Science specialization. In DLS we just assume that we already have well-curated and labeled data for our problems, but nothing is said about how to obtain or create such datasets. That's what Data Science is about, and it's also a huge and growing field with lots of job opportunities.
Another area that might be of interest is how to create and deploy ML/DL systems "at scale". What does it really mean to run such a system in "production" over time? There are huge issues there, and that's also a big area for job opportunities as not that many people have those skills. Take a look at the MLOps Specialization and see if that sounds interesting to you.
The GANs specialization is another interesting one to consider. GANs are specialized networks that can be used to synthesize new things (e.g. new images) that are useful for various applications. That technology is really interesting and creative. If you're just curious to see some pretty surprising things that can be achieved with DL techniques, GANs are definitely worth a look. One other benefit of taking the GANs specialization is that they use PyTorch as their framework instead of TF, so that gives you the chance to learn that.
If you were interested by the material in DLS Course 5 about Sequence Models, you might find the NLP specialization interesting. The early parts of it cover some of the simpler and older methods, but they also "go deep" on Sequence Models and Attention Models as applied to NLP problems.
So there are lots of possibilities, and it depends on how much time you have and what your goals and interests are.