DeepLearning.AI
Derivation of the gradients of W^[2] and b^[2] in the 1-hidden-neuron network
Course Q&A › Deep Learning Specialization › Neural Networks and Deep Learning › week-3
saifkhanengr · March 17, 2025, 1:36pm
Check this YouTube guide by Eddy Shyu and this explanation of the chain rule.
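For reference, the gradients the title asks about can also be verified numerically. Below is a minimal sketch of the 1-hidden-neuron network with sigmoid activations and binary cross-entropy loss, applying the chain rule to get dW^[2] = (a^[2] − y)·a^[1] and db^[2] = a^[2] − y, then checking against finite differences. All variable names and values are illustrative assumptions, not from the thread:

```python
import math

def forward(w1, b1, w2, b2, x, y):
    # One hidden neuron, sigmoid activations, binary cross-entropy loss
    z1 = w1 * x + b1
    a1 = 1.0 / (1.0 + math.exp(-z1))
    z2 = w2 * a1 + b2
    a2 = 1.0 / (1.0 + math.exp(-z2))
    loss = -(y * math.log(a2) + (1 - y) * math.log(1 - a2))
    return a1, a2, loss

def grads_w2_b2(w1, b1, w2, b2, x, y):
    # Chain rule: dL/dz2 = a2 - y, so dL/dw2 = (a2 - y) * a1 and dL/db2 = a2 - y
    a1, a2, _ = forward(w1, b1, w2, b2, x, y)
    dz2 = a2 - y
    return dz2 * a1, dz2

# Illustrative parameter values (assumptions for the check)
w1, b1, w2, b2, x, y = 0.3, -0.1, 0.8, 0.2, 1.5, 1.0
dw2, db2 = grads_w2_b2(w1, b1, w2, b2, x, y)

# Central finite-difference check of the analytic gradients
eps = 1e-6
num_dw2 = (forward(w1, b1, w2 + eps, b2, x, y)[2]
           - forward(w1, b1, w2 - eps, b2, x, y)[2]) / (2 * eps)
num_db2 = (forward(w1, b1, w2, b2 + eps, x, y)[2]
           - forward(w1, b1, w2, b2 - eps, x, y)[2]) / (2 * eps)
print(dw2 - num_dw2, db2 - num_db2)  # both differences should be tiny
```

The key step is dL/dz^[2] = a^[2] − y, which comes from multiplying dL/da^[2] by the sigmoid derivative a^[2](1 − a^[2]); everything after that is one more application of the chain rule.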
Related topics

| Topic | Category | Replies | Views | Activity |
|---|---|---|---|---|
| The intuition of db^[l]=dz^[l] and da^[l-1]=w^[l-1].dz^[l] | Neural Networks and Deep Learning | 4 | 784 | May 27, 2023 |
| W2_A1_Calculating gradient descent with variables Dw and db | Neural Networks and Deep Learning | 5 | 1020 | December 8, 2023 |
| W3_A1_Derivative for hidden neural layers (Backprop) | Neural Networks and Deep Learning | 5 | 608 | February 9, 2023 |
| Gradience Descent Backpropagatin Calculation | Neural Networks and Deep Learning | 1 | 624 | January 24, 2022 |
| How did we calculate dz[2] in Backpropagation Intuition (8:34)? | Neural Networks and Deep Learning | 1 | 645 | March 6, 2022 |