Multi-Layer Perceptrons in Depth, Part 1: Forward and Backward Propagations

I hope you enjoy reading it. My impression is that many learners in the Deep Learning Specialization have been missing a thorough derivation of multi-layer perceptrons in which every step is made fully explicit. With this three-part series, I want to inspire learners to take on, by themselves, other derivations they have questions about. My goal is to complete the remaining two posts, covering activation and cost functions, before the new year.

I have now published parts two and three as well: