# Personal project issues

Hello! I hope you are doing well.
I’ve completed all the MLS courses and the first course of DLS. Now I am building my own NN model from scratch, using random data, but I am running into some issues (picture attached). I am using ReLU for both the hidden and output layers (input = numerical value; output = numerical value). Kindly guide me on how to resolve this; I would be highly indebted to you.

About data:
X shape = (20, 1)
y shape = (20, 1)
n_x = X.shape[0] = 20
n_h = 7 (units/neurons in the hidden layer)
n_y = y.shape[0] = 20
L = len(parameters) // 2 = 2

If you need, I will send you the notebook.

Warm Regards,
Saif Ur Rehman.

It seems division by zero is the problem, probably in the A variable, which as far as I remember comes from the linear step WX + b (right?). You need to track down where exactly that division by zero is happening and fix it. I am guessing you might be initializing the weights all to zero; it is better to initialize them randomly. Also make sure the cost computation formula is right!

1 Like

A is `np.maximum(0, Z)`, while Z is `np.dot(W, X) + b`.
I cannot figure out where that division by zero is happening. I initialized the weights randomly, multiplying by 0.01.
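For reference, a minimal sketch of that small random initialization (the layer sizes below are placeholder assumptions for a single-feature regression, not taken from the thread; adjust them to your setup):

```python
import numpy as np

np.random.seed(1)
n_x, n_h, n_y = 1, 7, 1  # example layer sizes (assumed for illustration)

# Small random weights break symmetry; all-zero weights would make every
# hidden unit learn the same thing. Biases can safely start at zero.
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
```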

Actually, I copied all the functions from the DLS_C1_W4 assignments, changed the input and output, and made small modifications. But I am still struggling to find my mistake(s).

1 Like

Was there any inf or nan in your (normalized) data?

You may check it with `np.isinf(...)` and `np.isnan(...)`
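For example, a quick sanity check along these lines (the array name `X` and the toy values are assumptions for illustration):

```python
import numpy as np

X = np.array([[1.0], [2.0], [np.inf]])  # toy data containing a bad value

# np.isinf / np.isnan return boolean arrays; .any() flags any bad entry
has_inf = np.isinf(X).any()
has_nan = np.isnan(X).any()
print(has_inf, has_nan)  # True False
```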

1 Like

I just checked this. All are False, meaning there is no NaN or Inf.

I generated the data like this:

```python
x = np.arange(0, 20, 1)
y = 3 * x**2 + 3
X = x.reshape(-1, 1)
y = y.reshape(-1, 1)
```

1 Like

You probably have negative values inside the `np.log`, which gives NaN.

How to solve this issue?

Well, either `np.log(AL)` or `np.log(1-AL)` is getting a negative argument, so make sure to trace why it is becoming negative and prevent that. I don't remember the notation right now, but that is your start.

Your data is a regression problem. Which cost function are you using? What is the activation function in the NN’s last layer?

```python
cost = - np.sum(np.multiply(np.log(AL), Y) + np.multiply(np.log(1 - AL), (1 - Y))) / m
cost = np.squeeze(cost)
```

I am using ReLU for both the hidden and output layers.

1 Like

So this is the problem. Your data describes a regression problem, but you are not using a cost function for regression.

Your NN can produce very large positive numbers, which will trigger the problem as @gent.spah explained.
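To see this concretely, here is a tiny sketch (toy values; the variable name `AL` is an assumption) of how an unbounded ReLU output breaks the cross-entropy terms:

```python
import numpy as np

# A ReLU output layer is not confined to (0, 1), so the cross-entropy
# term log(1 - AL) can receive a negative argument.
AL = np.array([[5.0]])  # toy network output larger than 1

with np.errstate(invalid="ignore"):  # suppress the RuntimeWarning
    term = np.log(1 - AL)            # log(-4.0) -> nan

print(np.isnan(term).any())  # True
```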

Oh, I got it.
I should use `cost = np.sum((y - yhat)**2) / (2*m)`, right?

That will do!
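As a sketch, that mean-squared-error cost in NumPy could look like this (the toy arrays are assumptions for illustration; note the parentheses around `(2 * m)` so the division is by 2m, not by 2 times m's digits):

```python
import numpy as np

y = np.array([[3.0], [6.0], [30.0]])     # targets (toy values)
yhat = np.array([[2.5], [7.0], [28.0]])  # predictions (toy values)
m = y.shape[0]

# Halved mean squared error over the m examples
cost = np.sum((y - yhat) ** 2) / (2 * m)
print(cost)  # 0.875
```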

Raymond

2 Likes

Problem solved. I successfully trained my first NN from scratch. Thank you so much @rmwkwok and @gent.spah for your time and guidance.

1 Like

Hello, @gent.spah and @rmwkwok.

My first NN model performs poorly and is still giving one error (picture attached). After several hours of debugging, I realized that maybe I am using the wrong derivative for A2 (the output). I copied this derivative from the DLS assignment, which was for logistic regression: `dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))`
But my own model is doing linear regression. Do I need to change dA2? If yes, what should it be? Sorry to bother you again.

Linear regression and logistic regression have different cost functions, and hence different derivatives, as far as I remember, so it would make a difference.
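For the halved MSE cost discussed earlier, `np.sum((y - yhat)**2) / (2*m)`, the output derivative is much simpler than the logistic one. A minimal sketch (the names `AL` and `Y` and the toy values are assumptions):

```python
import numpy as np

AL = np.array([[2.5], [7.0], [28.0]])  # network output (toy values)
Y = np.array([[3.0], [6.0], [30.0]])   # targets (toy values)
m = Y.shape[0]

# d/dAL of sum((AL - Y)**2) / (2*m) is simply (AL - Y) / m
dAL = (AL - Y) / m
```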

1 Like

Hello @saifkhanengr! I think this post has the answer. You are also using ReLU for the hidden layers, so that is your reference.

2 Likes