Why are 2 test cases failing?

GRADED FUNCTION: two_layer_model

def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- if set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """

    np.random.seed(1)
    grads = {}
    costs = []                               # to keep track of the cost
    m = X.shape[1]                           # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    # (≈ 1 line of code)
    # parameters = ...
    # YOUR CODE STARTS HERE

    Moderator Edit: Solution Code Removed.

    return parameters, costs

def plot_costs(costs, learning_rate=0.0075):
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

Cost after iteration 1: 0.6930054580300078
Cost after first iteration: 0.693049735659989
Cost after iteration 1: 0.6927286108981437
Cost after iteration 1: 0.6927286108981437
Cost after iteration 1: 0.6927286108981437
Error: Wrong output for variable W1.
Error: Wrong output for variable b1.
Error: Wrong output for variable W2.
Error: Wrong output for variable b2.
Cost after iteration 2: 0.6922979697910279
Error: Wrong output for variable W1.
Error: Wrong output for variable b1.
Error: Wrong output for variable W2.
Error: Wrong output for variable b2.
2 Tests passed
2 Tests failed

AssertionError Traceback (most recent call last)
in
3 print("Cost after first iteration: " + str(costs[0]))
4
----> 5 two_layer_model_test(two_layer_model)

~/work/release/W4A2/public_tests.py in two_layer_model_test(target)
75 ]
76
---> 77 multiple_test(test_cases, target)
78
79

~/work/release/W4A2/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success, " Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
---> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143

AssertionError: Not all tests were passed for two_layer_model. Check your equations and avoid using global variables inside the function.

First, sharing your code is not allowed under the community code of conduct, so please avoid posting it.

Second, it is not enough to post your whole code and ask "Why are 2 test cases failing?" Please share only the week number, the assignment number, and the full error output.

Regarding your error: the arguments you pass to the linear_activation_backward function are wrong. Check the instructions again:

def linear_activation_backward(dA, cache, activation):
    ...
    return dA_prev, dW, db
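
For reference, the call pattern in the backward pass looks roughly like the sketch below. This is only an illustration of the argument order, not the graded solution; the names backward_pass_sketch, dA2, dA1, cache1 (from the LINEAR->RELU step), and cache2 (from the LINEAR->SIGMOID step) are assumptions based on the usual notebook conventions.

import numpy as np

def backward_pass_sketch(A2, Y, cache1, cache2):
    # Illustrative sketch only, assuming the course's linear_activation_backward helper.
    # Derivative of the cross-entropy cost with respect to the sigmoid output A2
    dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

    # Second layer: pass dA2 with the cache saved by that layer's forward step
    # and the activation string that matches it ("sigmoid")
    dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")

    # First layer: pass dA1 (not dA2) together with cache1 and "relu"
    dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")

    return {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}

The key point is that each call must receive the gradient coming out of the layer above it, the cache stored during that same layer's forward pass, and the activation string for that layer.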