This course is leaving a lot of questions unanswered for me

Hi,
Can anyone recommend a course that goes a bit more into the details? I am in week 3 now, but there are still a lot of unanswered questions for me, for example:

  • What is a tensor and what is its structure?
  • What is the relationship between Keras and Tensorflow?
  • How can a Siamese network be trained with 2 loss functions when it only has one set of weights?
  • What are the requirements for custom loss functions (I tried an if/else statement, but this failed)?
  • When are the different methods of custom layers called? (I was very surprised that build() is called without compile(); only when my code failed did I find out from this forum that it can also be called by call().)
  • How does Backprop work with Tensorflow?

These are just examples, but I am afraid there will be more as the course progresses, and I had actually hoped that these details would be covered in this course.

Maybe somebody knows another good resource?
Thanks! Mirko

Hello,

You have good questions, no doubt. Some of them can be answered with the courses that are offered here; answers to others could be found elsewhere. No course will ever quench all the questions: in science in general, one answer gives rise to multiple other questions.

I would advise you to continue searching wherever you can to find answers if you feel the need; the deeplearning courses have a lot of material. I have gone through almost all of them and have gathered considerable knowledge.


I’m not a mentor for the TF courses, but we do learn about TF in DLS. In addition to Gent’s excellent advice, note that the TensorFlow documentation and online tutorials are quite good. A lot of the questions you list above can be answered just by spending a little time getting familiar with the TF documentation.

A tensor is just a generalization of the notion of a multidimensional array. It can have arbitrarily many dimensions. See the TF docs for more.
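To make that concrete, here is a minimal sketch (plain TensorFlow, nothing specific to the course) of rank, shape, and dtype, which together describe a tensor's structure:

```python
import tensorflow as tf

# Tensors generalize scalars, vectors, and matrices to any number of dimensions.
scalar = tf.constant(3.0)                       # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2, shape (2, 2)

print(scalar.shape, vector.shape, matrix.shape)  # () (3,) (2, 2)
print(matrix.dtype)                              # <dtype: 'float32'>
print(tf.rank(matrix).numpy())                   # 2
```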

Keras was originally a separate set of APIs built on top of TF, but they decided that it made more sense to take the set-theoretic union of the two. François Chollet (the creator of Keras) is a prolific blogger and you can find plenty of information about Keras, e.g. starting here and here.
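As a small illustration of what that union looks like in practice (a sketch, not anything from the course): Keras now ships inside TensorFlow as the tf.keras module, so you define and compile models through it directly.

```python
import tensorflow as tf
from tensorflow import keras  # Keras is bundled with TF as tf.keras

# A tiny binary classifier built with the Keras high-level API on top of TF.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # everything underneath runs on TensorFlow ops
```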

TF autocomputes gradients for you and then applies them under API control. This is also covered in the TF docs, e.g. here. Note that I found that by going to any TF doc page and typing “gradient” in the search box.
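For the backprop question specifically, the core mechanism is tf.GradientTape: TF records the forward pass on a "tape" and then walks it backwards to compute gradients. A minimal sketch:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x               # forward pass is recorded on the tape
grad = tape.gradient(y, x)  # reverse-mode autodiff: dy/dx = 2x
print(grad.numpy())         # 6.0
```

model.fit() wraps essentially this loop for you: compute the loss under a tape, ask the tape for gradients, and hand them to the optimizer.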

So the bottom line here is that “Google is your friend” in more ways than one: they created TF in the first place and it’s easy to “google up” references for most questions related to TF.

For questions like this, I think the problem is that the DLAI TF courses assume you have already taken DLS or some other course like that which introduces you to most of the different types of neural nets and types of solutions (Feed Forward, ConvNets, RNNs, Siamese Networks and so forth). They don’t explain to you what a ConvNet is or what types of problems they are useful for: they assume you already know that and then just show you how to build such solutions using TF. So if you went straight to the TF specializations here and skipped DLS, that may not be the optimal strategy. E.g. in this particular instance, you can learn about Siamese Networks by looking at the Face Recognition section of DLS C4 W4 and there are some related ideas around multiple different loss functions in the Neural Style Transfer section (also DLS C4 W4) and (a bit more obliquely) in the Object Recognition and YOLO section of DLS C4 W3.
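On the Siamese question in particular, the short answer (a sketch of the general idea, not the course's actual code, and the loss terms below are purely illustrative) is that the two branches are the same sub-network called twice, and the "two" losses are summed into a single scalar before backprop, so there is only ever one gradient per shared weight:

```python
import tensorflow as tf
from tensorflow import keras

# One shared encoder: a single set of weights used by both branches.
encoder = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(4),
])
optimizer = keras.optimizers.Adam()

@tf.function
def train_step(x_a, x_b, target_distance):
    with tf.GradientTape() as tape:
        emb_a = encoder(x_a)  # branch 1
        emb_b = encoder(x_b)  # branch 2, the SAME weights
        # Two illustrative loss terms, combined into one scalar:
        dist = tf.norm(emb_a - emb_b, axis=1)
        loss_match = tf.reduce_mean(tf.square(dist - target_distance))
        loss_reg = 1e-3 * tf.reduce_mean(tf.square(emb_a))  # e.g. a regularizer
        loss = loss_match + loss_reg
    # One gradient per weight, even though each weight was used twice.
    grads = tape.gradient(loss, encoder.trainable_variables)
    optimizer.apply_gradients(zip(grads, encoder.trainable_variables))
    return loss
```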

As an experiment, you could try signing up for DLS as an auditor and then just listen to Prof Ng’s lectures about Face Recognition in C4 W4, and see if that’s enough to help you with that specific question without actually working the assignments (which of course costs both $$$ and more time). It’s been a while since I watched those lectures, but I’m sure Prof Ng covers the points about how the network is structured and how the loss functions work.
