Hello everyone,
This is a general question, but I didn't know where else to post it. I would like to ask the following.
Since I don't have much time, I started using TensorFlow directly, but I don't see all the theory that Prof Ng talks about, and things seem much simpler. Could we say that we build NNs from scratch in the assignments? (And what about all these Dense, Dropout, etc. layers that we just type when using TF?)
Yes, it is true that when we use TensorFlow, all the details of what is actually happening in the various functions are hidden from us and we don't have to worry about them. That makes it a lot quicker to put together a system, and that's the approach people use for solving "real world" problems. But if you only learn about TF, then you are missing a lot.

That is why Prof Ng always takes the approach of first teaching us how networks of each type actually work by having us build them ourselves in python and numpy. Notice that he did that with all the new concepts here in Course 2, such as L2 Regularization, Adam Optimization and Dropout.

Once you have an intuitive understanding of how the various types of layers work and what they can do, you are better prepared to apply the "canned" functions from TensorFlow: you know what their purpose really is, and you also know what to do if your first attempt at a solution for a particular problem doesn't work as well as you hoped.
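To make the contrast concrete, here is a minimal sketch (my own illustration, not code from the course assignments; the layer sizes and hyperparameters are arbitrary) of the two approaches side by side: a few lines of Keras where Dense, Dropout, L2 Regularization and Adam are each a single call, next to the kind of explicit numpy steps the assignments have us write by hand.

```python
import numpy as np
import tensorflow as tf

# The "canned" TensorFlow/Keras version: L2 regularization, Dropout
# and Adam are each a single line, with the mechanics hidden from us.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")

# Roughly what one Dense layer's forward pass does under the hood,
# written out in numpy the way the from-scratch assignments do it:
def dense_forward(A_prev, W, b):
    Z = W @ A_prev + b      # linear step
    A = np.maximum(0, Z)    # ReLU activation
    return A

# Inverted dropout, as taught in Course 2: zero out units at random,
# then rescale so the expected activation stays the same.
def dropout_forward(A, keep_prob=0.8):
    D = (np.random.rand(*A.shape) < keep_prob)
    return (A * D) / keep_prob
```

Each Keras line above hides roughly the kind of mechanics shown in the two small numpy functions, which is exactly why building them by hand first pays off.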
If I’ve missed your point, please let me know!
This is exactly the point that made me understand the value of going through all this material by building everything from scratch.
Thank you @paulinpaloalto for your response. You addressed all my questions.