def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
This is the built-in function signature provided in the assignment. All the other parameters are derived, but I'm confused about the activation argument (ReLU and sigmoid). Do we need to implement these activations ourselves, or are they already available somewhere?
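For reference, here is a minimal sketch of what the function could look like if the activations were written from scratch with NumPy. The helper names sigmoid and relu and the exact cache layout are assumptions for illustration, not necessarily what the assignment's provided helpers use:

import numpy as np

def sigmoid(Z):
    # Element-wise sigmoid; Z is returned as the activation cache for the backward pass.
    A = 1 / (1 + np.exp(-Z))
    return A, Z

def relu(Z):
    # Element-wise ReLU; Z is returned as the activation cache for the backward pass.
    A = np.maximum(0, Z)
    return A, Z

def linear_activation_forward(A_prev, W, b, activation):
    # Linear step: Z = W . A_prev + b
    Z = np.dot(W, A_prev) + b
    linear_cache = (A_prev, W, b)

    # Apply the chosen non-linearity to Z.
    if activation == "sigmoid":
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        A, activation_cache = relu(Z)
    else:
        raise ValueError("activation must be 'sigmoid' or 'relu'")

    cache = (linear_cache, activation_cache)
    return A, cache

Both activations are only a line or two of NumPy each, so implementing them yourself is straightforward even if the course does not hand them to you.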