Hello, I am currently doing the DLS, but I feel like I learn nothing from hard-coding everything, since the basics are already given and you don't really have to think and learn. I don't feel like I'm learning, even though I listen to all the lectures and understand them.
In real-world applications I would just use TensorFlow or PyTorch, so I am very confused about why we are hard-coding everything.
Please help me.
Thanks !
@DariusG I wouldn’t really say we are ‘hard-coding’. Without writing these pieces yourself, you’d have no idea what PyTorch/TF is doing ‘under the hood’, and I figure that is important to know.
Hi @DariusG ,
Coursera has now incorporated a study buddy in DLS, an AI coach trained specifically on the DLS content to help students along on their learning journey. If you have any questions about the lectures, coding assignments, graded quizzes, or programming in general, you can ask the coach: just write your question in ordinary text and see if the coach can help you understand.
Often, we learn by seeing how it is done. The code in the assignments is one such example, guiding you on how to write such a function, what it does, and how it works. Although it might look like you are just filling in the blanks, as you go through the code and try to understand what is involved, you will learn how to deal with similar problems. For example, how to unroll images (see the sketch below).
Learning new concepts takes time. I hope you will find the AI coach helpful in your learning journey.
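To illustrate what “unrolling” an image means, here is a minimal NumPy sketch. The batch size and image dimensions are made up for illustration, and the variable names are not necessarily the ones used in the assignment, but the idea is the same: each image becomes one long column of pixel values.

```python
import numpy as np

# A hypothetical batch: 4 RGB images of 64x64 pixels (shapes chosen for illustration).
images = np.random.rand(4, 64, 64, 3)

# "Unroll" each image into one long column vector, so the whole batch
# becomes a (64*64*3, 4) matrix with one flattened image per column.
X_flatten = images.reshape(images.shape[0], -1).T

print(X_flatten.shape)  # (12288, 4)
```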
I agree with Anthony that we aren’t really “hard-coding” anything in the DLS programming exercises. They do try to help you out by writing most of the “template” python code for you, so that you only have to fill in the parts that are the key functional sections of the various algorithms we are learning about here in DLS. My interpretation is that Prof Ng is using this pedagogical method for a specific reason: he wants us to understand how the core algorithms work, and the best way to achieve that is to write them in python.

You are correct that in “real” applications, no one writes all the algorithms in python: we will soon learn to use TensorFlow (and PyTorch would be a perfectly valid approach as well) to implement full solutions. But the limitation of skipping Prof Ng’s method and going straight to TF is that you lose a lot of the intuition about what is actually happening. If we only learn the TF APIs and treat everything as a big “black box”, meaning that we don’t know what is going on “under the covers”, then we don’t have as good intuitions about what to do when our first try at solving some problem with a particular architecture doesn’t work as well as we’d like.
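To make that contrast concrete, here is a rough sketch (not code from the course): one dense layer's forward pass written out in NumPy next to roughly the same layer called through the Keras API. The function name `dense_forward` and the layer size are made up for illustration.

```python
import numpy as np
import tensorflow as tf

# From-scratch view: one dense layer's forward pass written out in NumPy,
# so every operation is visible.
def dense_forward(A_prev, W, b):
    Z = W @ A_prev + b      # linear step: Z = W·A_prev + b
    A = np.maximum(0, Z)    # ReLU activation
    return A

# Framework view: roughly the same layer as a Keras one-liner,
# with the linear step and the activation hidden inside the API.
layer = tf.keras.layers.Dense(units=10, activation="relu")
```

Once you have written the NumPy version yourself in the exercises, the Keras call stops being a black box: you know exactly what computation it is hiding, which is exactly the intuition the DLS assignments are trying to build.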