TensorFlow Exercise

I’ve completed the Course 2 Week 3 exercise on TensorFlow. Although I feel fairly confident writing a numpy multi-layer NN from the previous lectures, I feel a bit overwhelmed by TensorFlow. The exercise was simple enough, and I read the recommended documentation, but I am nowhere near confident enough to write my own NN with TF. Moving forward to Courses 3-5, will we get more on TF, or should we look for additional courses? It seems TF dramatically simplifies the coding process, and it is something I probably need to understand better. Thanks

TensorFlow is used for a good chunk of the exercises. Most of what you need should be explained in the lectures, and given that there are a lot of online resources, you should be OK. There are some parts that involve implementing calculus functions, but they’re optional.
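To give a sense of why those calculus parts can stay optional, here’s a minimal sketch (my own toy example, not from the course assignments) of how TF 2.x computes gradients for you with tf.GradientTape:

```python
import tensorflow as tf  # TF 2.x

# TF does the differentiation via autodiff, so the hand-written
# backprop calculus from the numpy exercises becomes optional
w = tf.Variable(3.0)

with tf.GradientTape() as tape:
    loss = w ** 2  # toy quadratic "loss", purely for illustration

grad = tape.gradient(loss, w)
print(grad.numpy())  # d(w^2)/dw = 2w = 6.0
```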

Here are some resources to help you out.

You will see plenty more examples of various TF/Keras constructs in Courses 4 and 5. Whether the amount of explanation they give will be enough to make you feel confident is, of course, another question. I think a lot of students feel the same way you do, even after trying the more advanced assignments. There are some threads here on Discourse that give additional explanations, e.g. this one (it’s a more advanced topic, so you won’t recognize it based on the first exercise, but bookmark it for when you get to Course 4).

There are also a couple of specializations here from Deeplearning.AI that are specific to TensorFlow. I have not explored them, but they sound like a good place to look for a better understanding of applying TF to different classes of problems.

I completed the deeplearning.ai TensorFlow Developer specialization in 2019 and the TensorFlow Advanced Techniques specialization last year. I feel there is some benefit to working with the language/platform that will make the courses in this specialization that use TensorFlow and Keras a little easier, but for the most part I see those two sets of classes as providing more breadth rather than more depth. By that I mean you get exposed to more of the TF and Keras APIs, but don’t spend much time going deeper on the foundations of the language. In my opinion there is still an information gap around the details of how the key abstractions (graph, model, optimizers, loss functions, data, and learned parameters) interact, especially in a distributed computing environment. This is partly because the other classes, being newer, use TF 2.x, which does a better job of hiding some of those details than TF 1.x did.
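For reference, here’s a minimal sketch (a toy example of my own, not taken from any of the courses) of how those abstractions typically fit together in TF 2.x / Keras:

```python
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration
X = np.random.rand(100, 3).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# Model: the learned parameters live inside the layers
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

# compile() wires the optimizer and loss to the model's parameters
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")

# fit() drives the loop: forward pass, loss, gradients, parameter updates
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
```

All the graph-building and gradient plumbing happens behind these three calls, which is exactly the detail-hiding I mean.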

I guess net-net my experience is that there is definitely a learning curve in this specialization to get past the TF 1.x syntax for sessions, placeholders, and feed_dict (see the sketch below for an example), but once you do, there isn’t much more TF you need to learn to complete the specialization. And since the other specializations use 2.x, which does away with all of those concepts, they won’t be much help for the exercises in the CNN or Sequences classes. YMMV.
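For anyone who hasn’t seen that older style, here’s a rough sketch of the TF 1.x pattern (variable names and shapes are just illustrative):

```python
import tensorflow as tf  # TF 1.x style

# Build a static graph first: a placeholder is a promise to feed data later
x = tf.placeholder(tf.float32, shape=(None, 3))
W = tf.Variable(tf.random_normal((3, 1)))
y = tf.matmul(x, W)

# Nothing actually runs until you open a Session and feed the placeholder
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]})
    print(result)
```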

Yes, the TF2 interface is so much easier to use. Note that a big part of the recent April 2021 update to all the DLS courses was specifically the upgrade from TF1 to TF2. So I think we’re all on the TF2 bandwagon now, and there’s no more need to come to grips with sessions and all that.
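For comparison, the same toy computation in TF 2.x runs eagerly, with no graph or session bookkeeping (again, just an illustrative sketch):

```python
import tensorflow as tf  # TF 2.x

W = tf.Variable(tf.random.normal((3, 1)))
x = tf.constant([[1.0, 2.0, 3.0]])

# Executes immediately; no placeholder, Session, or feed_dict needed
y = tf.matmul(x, W)
print(y.numpy())
```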

TensorFlow courses offered by Google will be useful once you understand the basics. See if you like https://www.coursera.org/professional-certificates/tensorflow-in-practice#courses . It gives you a broad feel for the TensorFlow APIs without the depth of the DLS specialization.