Hi!
The concept that’s being discussed here is called higher-order functions; see the article “Higher Order Functions in Python” on GeeksforGeeks.
Let’s go through two examples from that article that are similar to your question.
# Python program to illustrate that functions
# can be treated as objects
def shout(text):
    return text.upper()

print(shout('Hello'))

yell = shout
print(yell('Hello'))
The above shows that we can define a function and assign a reference to it to another variable, in this case yell.
We can then call yell as if it were the shout function. This is important for the next code example.
# Python program to illustrate that functions
# can be passed as arguments to other functions
def shout(text):
    return text.upper()

def whisper(text):
    return text.lower()

def greet(func):
    # storing the result of calling the passed-in function in a variable
    greeting = func("Hi, I am created by a function passed as an argument.")
    print(greeting)

greet(shout)
greet(whisper)
In the second code example a function is passed into another function and called inside it.
When the function is passed in, think of it like the first example where yell = shout, except this time func = shout or func = whisper, depending on whether it is called as greet(shout) or greet(whisper).
Inside the greet function, func can be used as if it were the shout or whisper function.
Now:
def gradient_descent(x, y, w_in, b_in, alpha, num_iters, cost_function, gradient_function):
is the same idea as greet(func), only this time cost_function = compute_cost and gradient_function = compute_gradient when they are passed into the function.
This is done in the lab here:
w_final, b_final, J_hist, p_hist = gradient_descent(x_train, y_train, w_init, b_init, tmp_alpha,
                                                    iterations, compute_cost, compute_gradient)
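To see how this plays out end to end, here is a minimal, self-contained sketch of a gradient_descent that calls whatever cost and gradient functions it is given. This is not the lab’s actual code: the training data, learning rate, and iteration count below are placeholders, and the lab’s implementation also records history (J_hist, p_hist), which is omitted here for brevity.

```python
def compute_cost(x, y, w, b):
    # Mean squared error cost: (1 / 2m) * sum((w*x + b - y)^2)
    m = len(x)
    return sum((w * x[i] + b - y[i]) ** 2 for i in range(m)) / (2 * m)

def compute_gradient(x, y, w, b):
    # Partial derivatives of the cost with respect to w and b
    m = len(x)
    dj_dw = sum((w * x[i] + b - y[i]) * x[i] for i in range(m)) / m
    dj_db = sum((w * x[i] + b - y[i]) for i in range(m)) / m
    return dj_dw, dj_db

def gradient_descent(x, y, w_in, b_in, alpha, num_iters,
                     cost_function, gradient_function):
    # cost_function and gradient_function are just local names for
    # whatever functions the caller passed in.
    w, b = w_in, b_in
    for _ in range(num_iters):
        dj_dw, dj_db = gradient_function(x, y, w, b)
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b, cost_function(x, y, w, b)

# Toy data: two points that lie on y = 200x + 100
x_train = [1.0, 2.0]
y_train = [300.0, 500.0]
w_final, b_final, final_cost = gradient_descent(
    x_train, y_train, 0, 0, 1.0e-2, 10000,
    compute_cost, compute_gradient)
print(w_final, b_final)
```

Note that gradient_descent never mentions compute_cost or compute_gradient by name; it only knows them through its parameters, so you could pass in any functions with the same call signature.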
Can the parameters have different names? Yes! All you have to do is use the new names inside the function body when you change them.
So if you change the function signature to:
def gradient_descent(x, y, w_in, b_in, alpha, num_iters, compute_cost, compute_gradient):
and call gradient_descent as in the lab again:
w_final, b_final, J_hist, p_hist = gradient_descent(x_train, y_train, w_init, b_init, tmp_alpha,
                                                    iterations, compute_cost, compute_gradient)
then compute_cost (the parameter) refers to compute_cost (the function defined in the lab), and compute_gradient (the parameter) refers to compute_gradient (the function defined in the lab).
The parameters can now be used inside the function as the functions that were passed to them.
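To make the renaming concrete, here is the earlier greet example again with its parameter renamed (a toy illustration, not lab code; my_function is a name I made up):

```python
def shout(text):
    return text.upper()

# Same greet as before, but the parameter is now named my_function
# instead of func; only the name used inside the body changes.
def greet(my_function):
    greeting = my_function("Hi, I am created by a function passed as an argument.")
    print(greeting)

greet(shout)  # prints: HI, I AM CREATED BY A FUNCTION PASSED AS AN ARGUMENT.
```

The caller doesn’t care what the parameter is called; greet(shout) works identically whether the parameter is named func, my_function, or anything else.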
Hope this clears things up!