Changing loss function - what changes in gradient descent?

I know there are loss functions other than cross entropy. My (potentially stupid) question is: if I wanted to try a different loss function, what would I need to change in the gradient descent calculations? Only dA^{[L]}? Do all the formulas shown in the embedded image still apply?


Hello, Rodrigo Pedro.

Not a stupid question at all. Mathematically, the loss enters backpropagation at exactly one point: dA^{[L]} = ∂L/∂A^{[L]}, the derivative of the loss with respect to the final activation. Everything downstream of that, dZ^{[l]} = dA^{[l]} * g^{[l]}′(Z^{[l]}), dW^{[l]} = (1/m) dZ^{[l]} A^{[l-1]T}, db^{[l]} = (1/m) Σ dZ^{[l]}, and dA^{[l-1]} = W^{[l]T} dZ^{[l]}, follows from the chain rule and does not depend on which loss you chose. So yes: swap in the new ∂L/∂A^{[L]} and the remaining formulas still apply unchanged.

One caveat: if your code uses the shortcut dZ^{[L]} = A^{[L]} - Y, that identity is specific to sigmoid (or softmax) paired with cross entropy. With a different loss you must compute dA^{[L]} from the new loss and then apply dZ^{[L]} = dA^{[L]} * g^{[L]}′(Z^{[L]}) explicitly.
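To make this concrete, here is a minimal NumPy sketch (all names hypothetical, not from any particular course code) of the backward pass for the last layer of a sigmoid-output network. The only loss-specific line is the one computing dA^{[L]}; swapping cross entropy for mean squared error means passing a different `dA_fn` and nothing else.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss derivatives w.r.t. the final activation A_L (shape (1, m)).
def dA_cross_entropy(A_L, Y):
    # L = -(Y*log(A) + (1-Y)*log(1-A))  =>  dL/dA = -Y/A + (1-Y)/(1-A)
    return -(np.divide(Y, A_L) - np.divide(1 - Y, 1 - A_L))

def dA_mse(A_L, Y):
    # L = (A - Y)^2  =>  dL/dA = 2*(A - Y)
    return 2 * (A_L - Y)

def backward_last_layer(A_L, Z_L, A_prev, Y, dA_fn):
    """Gradients for the output layer; only dA_fn is loss-specific."""
    m = Y.shape[1]
    dA_L = dA_fn(A_L, Y)                 # the ONLY loss-dependent step
    dZ_L = dA_L * A_L * (1 - A_L)        # chain rule with g'(Z) for sigmoid
    dW = (1 / m) * dZ_L @ A_prev.T       # identical for every loss
    db = (1 / m) * np.sum(dZ_L, axis=1, keepdims=True)
    return dW, db
```

Note that with `dA_cross_entropy` the product dA^{[L]} * g′(Z^{[L]}) algebraically simplifies to A^{[L]} - Y, which is where the familiar shortcut comes from; with `dA_mse` it does not, so the explicit two-step computation above is what you want.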