Hi @Nick_He
Welcome to the community.
This is a topic I haven't worked with in a while, but I'll try to help you out.
So, the additional factor of (1/m) in the derivative expression comes from the definition of the cost as a mean, carried through by the chain rule of calculus. Let's go through the derivation step by step to see where it comes from.
Given the cost function J, which is defined as the mean of the loss function L over m training examples:
J = (1/m) * SUM(L(yhat, y)) over all training examples
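For concreteness, here is a minimal numpy sketch of that definition. It assumes logistic regression with the binary cross-entropy loss; the specific loss is just an illustrative choice, since any per-example loss works the same way:

```python
import numpy as np

def cost(yhat, y):
    """Mean binary cross-entropy over m examples.

    yhat, y: arrays of shape (1, m).
    """
    m = y.shape[1]
    losses = -(y * np.log(yhat) + (1 - y) * np.log(1 - yhat))  # L(yhat, y), one value per example
    return (1 / m) * np.sum(losses)                            # J = (1/m) * SUM(L)
```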
Now, we want to calculate the derivative of J with respect to the weights W. In order to do that, we can use the chain rule of calculus.
Let’s denote dJ/dZ as the derivative of J with respect to the weighted sum Z. That is, dJ/dZ = (1/m) * SUM(dL(yhat, y)/dZ) over all training examples.
Next, we want to find dZ/dW, the derivative of the weighted sum Z with respect to the weights W. Since Z = WX + b, the derivative of Z with respect to W is simply X (in the vectorized implementation it shows up as X^T, as you'll see below).
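In the vectorized convention used here (each column of X is one training example), the forward pass looks like this; the sizes are just illustrative:

```python
import numpy as np

n, m = 3, 5                  # n features, m examples (illustrative sizes)
X = np.random.randn(n, m)    # each column is one training example
W = np.random.randn(1, n)    # weights for a single output unit
b = 0.0

Z = np.dot(W, X) + b         # shape (1, m): one weighted sum per example
```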
Now, to calculate dJ/dW, we apply the chain rule:
dJ/dW = dJ/dZ * dZ/dW
Substituting the expressions we derived earlier:
dJ/dW = (1/m) * SUM(dL(yhat, y)/dZ * X) over all training examples
(in the vectorized implementation this is written as dW = (1/m) * dZ · X^T)
Now, the factor of (1/m) shows up in the derivative expression because J is defined as the mean of the loss L over m training examples. Differentiation is linear, so the derivative of an average is the average of the derivatives, and that is exactly where the (1/m) factor comes from; the chain rule simply carries it through.
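Here is a quick numerical check of that point, assuming the usual logistic-regression case where dL/dZ = yhat - y (the sigmoid and the random data are just illustrative assumptions): the vectorized gradient with its (1/m) factor matches the average of the m per-example gradients.

```python
import numpy as np

np.random.seed(0)
n, m = 3, 5
X = np.random.randn(n, m)
W = np.random.randn(1, n)
b = 0.0
y = (np.random.rand(1, m) > 0.5).astype(float)

Z = np.dot(W, X) + b
yhat = 1 / (1 + np.exp(-Z))    # sigmoid activation (assumed)
dZ = yhat - y                  # dL/dZ for each example, shape (1, m)

# Vectorized gradient: the (1/m) is there because J is a mean.
dW = (1 / m) * np.dot(dZ, X.T)

# The same thing, computed as the average of per-example gradients.
per_example = [dZ[:, i:i+1] * X[:, i:i+1].T for i in range(m)]
dW_avg = sum(per_example) / m

print(np.allclose(dW, dW_avg))  # True
```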
Intuitively, the (1/m) factor ensures the gradients are appropriately scaled when performing gradient descent (or its mini-batch/stochastic variants). Without it, the gradient's magnitude would grow with the number of examples m, so the effective step taken by a fixed learning rate would depend on the batch size, leading to unstable or inefficient learning.
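Here is a toy sketch of that scaling effect (repeating a single example is an artificial setup, chosen so the comparison is exact): without the (1/m), the gradient norm grows linearly with m; with it, the gradient is the same for both batch sizes.

```python
import numpy as np

np.random.seed(1)
n = 3
W = np.random.randn(1, n)
x = np.random.randn(n, 1)          # one fixed example...
y1 = np.array([[1.0]])

for m in (10, 1000):
    X = np.tile(x, (1, m))         # ...repeated m times
    y = np.tile(y1, (1, m))
    yhat = 1 / (1 + np.exp(-np.dot(W, X)))
    dZ = yhat - y

    summed = np.dot(dZ, X.T)       # without (1/m): norm grows linearly with m
    averaged = summed / m          # with (1/m): identical for every batch size here

    print(m, np.linalg.norm(summed), np.linalg.norm(averaged))
```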
In summary, the (1/m) factor in the derivative expression comes from the definition of the cost function J as the mean of the loss function L over m training examples, carried through by the chain rule. It ensures the gradients are properly scaled during training.
Best regards
elirod