Feedforward Neural Networks in Depth

This is great! I especially liked the subscripts j and k as a way of keeping track of what’s going on.

The suggested 10-minute reading time on the course website is a little misleading. I have no background in calculus, and I gave it a day, with ChatGPT as a tutor. I think we got somewhere, but I'm going to have to come back to it a few times.


Hi Jonas. Thanks so much for these three articles, "Feedforward Neural Networks in Depth". They are really great, except for the pixelated quality of the equations. I assume you wrote these in LaTeX and converted them to HTML? Do you have the original .tex files or the PDFs to share?

Thanks for the resource for further study. I have a question about Part 2:

[screenshot of an equation from the article, with one derivative underlined]

Why is the partial derivative for u_3 equal to this?

A well-explained and detailed breakdown of forward and backward propagation! Appreciate you sharing this with us.

Hello, @Li_Tang,

You might want to check out a topic called “total derivative” first, and with that in mind, see if the underlined equation is complete and correct.
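As a minimal sketch of the idea (with generic variable names of my own, not the article's exact notation): if L reaches u_3 through several intermediate variables v_1, ..., v_n, the total derivative sums the contribution from every path:

```latex
\frac{dL}{du_3} = \sum_{k=1}^{n} \frac{\partial L}{\partial v_k}\,\frac{\partial v_k}{\partial u_3}
```

A single partial derivative only accounts for one of those paths, which is why it is worth checking whether the underlined equation is complete.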

Cheers,
Raymond

Late to the party, but if I understand you correctly (I'm still on Article 1), I think it would be less confusing if, in Figures 1 and 2, multiplying by the W weights led to Z nodes instead of A nodes.

Then, if you'd like, you could add an extra layer to the right with the A nodes, where each A node takes in all the Z nodes.

I know it's unorthodox to separate the Zs and As into different layers, but as the figures stand, there is no clear indication that each A node is made up of all the Zs in its layer.

I hope my description isn't too confusing, and there may well be better solutions, maybe just a simple note beneath the figures. But all in all, the content is great, thank you!
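For what it's worth, here is a minimal NumPy sketch of the separation I have in mind (the softmax activation and all shapes are just my illustration, not taken from the article): the Z values come purely from multiplying by the W weights, and each A value is then computed from all of the Zs.

```python
import numpy as np

def softmax(z):
    # Each output a_j depends on *all* of the z values, not just z_j
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))  # 4 inputs -> 3 units (toy shapes)
b = rng.standard_normal(3)
x = rng.standard_normal(4)

z = W @ x + b    # the "Z layer": purely the weighted sums
a = softmax(z)   # the "A layer": each entry mixes all the Zs

print(z)
print(a, a.sum())  # probabilities summing to 1
```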

Edit: when you mention the NumPy documentation, are you just referring to the regular documentation on the NumPy website, or is there a PDF made for this course?

There is no special PDF of the NumPy documentation for these courses, so you need to consult the NumPy website. One other important thing to note is that these courses typically use the versions of all software packages (Python, NumPy, TF, and other libraries) that were current at the time the courses were last updated, which is April 2021 for most of the courses in DLS. The various software packages continue to evolve, so the documentation on the NumPy website (for example) shows the current state of play and thus may include features that were not available when the courses were published.
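If it helps, a quick way to see which versions you are running locally (assuming NumPy and TensorFlow are installed in your environment):

```python
import numpy as np
import tensorflow as tf

print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)
```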

Also note that, judging by Jonas's forum activity, he last posted anything in 2022, so you probably will not get any answers from him, at least here in this thread.


I see, thanks for letting me know!

Thank you for writing the blog, Jonas!

After watching the course videos, I was looking for an in-depth derivation and explanation of forward and backward propagation, and this provides exactly that without wandering off much. I'm sure it will help many other students like me. I highly appreciate the effort put into writing this and making the information freely available!


Splendid explanation! Thanks for your clarification. I couldn't be clearer now on those basic concepts in neural networks.