Question regarding the cost in C1-W4 assignment (Deep Neural Network - Application)

  • I tried to use my own data in the two_layer_model code from the C1-W4 assignment (Deep Neural Network - Application).
  • I edited nx, nh, and ny to match my data's dimensions.
  • However, when I tried to calculate the cost, I got an error message saying "TypeError: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'".
  • How can I fix this error?
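For context, this particular TypeError usually appears when the labels were loaded as strings and then mixed with float arrays in the cost computation. A minimal sketch of the situation (all names and values here are illustrative, not the assignment code):

```python
import numpy as np

Y_str = np.array([["1", "2", "5"]])   # labels loaded as strings -> dtype '<U1'
AL = np.array([[0.8, 0.1, 0.6]])      # activations are float64

try:
    cost = -np.sum(Y_str * np.log(AL))
except TypeError as e:
    # Raises a TypeError about casting (or a missing ufunc loop,
    # depending on the NumPy version) because str * float is undefined.
    print("failed:", e)

# Fix: convert the labels to a numeric dtype before computing the cost.
Y = Y_str.astype(np.float64)
cost = -np.sum(Y * np.log(AL))        # now computes without error
```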

  • The first attached photo is the original code.
  • The second one is my implementation.

{moderator edit - solution code removed}

What are the type and values of your Y inputs?

Also note that we shouldn’t really be publishing the source code here, since your solution is the assignment code.

I apologize for publishing the source code here; I did not know that it is not allowed :sweat_smile:.
Today I managed to fix the previous error, and the network is running.
However, I still have a problem regarding normalizing Y inputs.
My Y inputs are the classes (M1, M2, …, M5), which I transformed into numerical values from 1 to 5. However, there is a problem when I try to normalize these numbers (1 to 5): the normalized vector contains zeros instead of values between 0 and 1. I am using MinMaxScaler to normalize the Y inputs (1, 2, …, 5). What could be the problem?

Please see the output in the attached picture.

When I use MinMaxScaler on the X_input, it works correctly. The original X_input values are between -2.5 and 5.6, and after applying MinMaxScaler, the normalized values are between 0 and 1, as you can see in the following picture.

But it did not work on the Y_input.
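One possible cause of the all-zeros result, in addition to the string issue discussed below: MinMaxScaler scales each *column* (feature) independently. If Y happens to be stored as a row vector of shape (1, m), every column contains a single value, so min equals max per column and everything maps to 0. A minimal sketch (variable names are illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([1, 2, 3, 4, 5], dtype=np.float64)

# Row vector (1, 5): each column holds one sample, so per-column
# min == max and the scaler outputs 0 everywhere.
row = MinMaxScaler().fit_transform(y.reshape(1, -1))
print(row)          # [[0. 0. 0. 0. 0.]]

# Column vector (5, 1): one feature with 5 samples -> scaled into [0, 1].
col = MinMaxScaler().fit_transform(y.reshape(-1, 1))
print(col.ravel())  # [0.   0.25 0.5  0.75 1.  ]
```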

Hello @Yahya_Ghareeb, it looks to me like your Y_input is an array of strings. Do you see the quotation marks? They are a sign that those are strings.


Good point! So you should not need to “normalize” the Y values: just convert them to integers representing the 5 classes of the output. Then use softmax as the output activation function and the corresponding “categorical” version of cross entropy loss. If you are not familiar with softmax, you should take DLS Course 2 or watch some lectures from Geoff Hinton. Here’s his lecture about softmax as a YouTube video. You will need that in both the Feed Forward case and the ConvNet case, since the output layer will look the same in both implementations.
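The pipeline described above (string classes → integer ids → one-hot targets → softmax plus categorical cross-entropy) can be sketched as follows; all names here are illustrative, not the assignment's code:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

labels = np.array(["M1", "M2", "M3", "M4", "M5", "M2"])

# 1) strings -> integer class ids (alphabetical order: M1 -> 0, ..., M5 -> 4)
y_int = LabelEncoder().fit_transform(labels)   # [0 1 2 3 4 1]

# 2) integer ids -> one-hot targets for the categorical loss
Y = np.eye(5)[y_int]                           # shape (6, 5)

# 3) softmax output activation (max subtracted for numerical stability)
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# 4) categorical cross-entropy averaged over the batch
def cross_entropy(Y, A):
    return -np.mean(np.sum(Y * np.log(A + 1e-12), axis=1))

A = softmax(np.random.randn(6, 5))   # stand-in for the network's output layer
loss = cross_entropy(Y, A)
```

With this setup the predicted class is simply `A.argmax(axis=1)`, and no normalization of Y is needed at all.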


There are quotation marks because I used the following line for printing:
print(str(y_test))

Thanks for your suggestion, it works now. I was using y_test values of 1, 2, 3, 4, and 5; now I have changed them to 0, 0.25, 0.5, 0.75, and 1, and the network is working.

Thanks for your support :smiley:

That’s good to hear. Did you switch to using softmax also or are you still using sigmoid as the output activation?