C1W4_Assignment error with fit

In the code for creating train_happy_sad_model(train_generator) I am using this code to compile and fit the model

# Compile the model
# Select a loss function compatible with the last layer of your network
model.compile(loss='binary_crossentropy',
              optimizer=tf.optimizers.Adam(),
              metrics=['accuracy'])     


# Train the model
# Your model should achieve the desired accuracy in less than 15 epochs.
# You can hardcode up to 20 epochs in the function below but the callback should trigger before 15.
history = model.fit(x=train_generator,
                    epochs=20,
                    callbacks=[callbacks]
                   )

But I am getting this error:
UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('<U32') to dtype('float32') with casting rule 'same_kind'

It seems to be related to the call to fit. I am not sure what I am doing wrong, since my code for creating train_generator seems to work and the parentheses in the code above look balanced. Any help would be appreciated.
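For reference, dtype('<U32') in the message is a NumPy string type, so this class of error generally means a parameter that should be a number is a string. A minimal sketch reproducing the failure mode (using the string 'True' as a stand-in for a misconfigured numeric parameter; this is an assumption, not necessarily the cause in this notebook):

```python
import numpy as np

# A float32 batch, similar to what ImageDataGenerator yields.
img = np.ones((2, 2), dtype='float32')

# Multiplying float32 data by a string fails: NumPy has no
# 'multiply' loop combining float and '<U...' (string) dtypes.
failed = False
try:
    img * 'True'
except TypeError:  # UFuncTypeError is a subclass of TypeError
    failed = True

# With a numeric factor the same multiplication works fine:
scaled = img * (1.0 / 255)
```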

Hello @Alankar_Kampoowale

You are not supposed to post code on public threads. Kindly remove it; it is against the community guidelines. You can always post a screenshot of the error you encountered.

Please share a complete image of the error. From what I can see, the loss you are using needs correction: there is a difference between 'binary_crossentropy' and losses.binary_crossentropy, and the latter should be used here. If you still do not get the desired outcome, you can try changing your optimizer.

But the root cause of your issue is a datatype mismatch between values in the computation, which produces this error. Also, if you use TensorFlow's Adam, it should be referenced as tf.keras.optimizers.Adam, not as you wrote it.

Let me know if you still have an issue after these corrections.

Regards
DP

Hi,
I tried changing the loss to losses.binary_crossentropy and the optimizer to 'rmsprop'. I also changed the activation function in the output layer to 'sigmoid' with only one node. But the error is still happening. I am attaching a screenshot of the error that I am getting. I hope this helps in figuring out the problem.

How is the datatype recalled in your code? The error indicates that you are trying to convert a datatype incorrectly.

I am not quite sure what is meant by "datatype recalled in your codes". Which part of the notebook is supposed to have that? Is it in the part where train_happy_sad_model() is defined, where image_generator() is defined, or somewhere else? If you could let me know, I could tell you more about my code without actually posting the code itself.
Thanks

Kindly DM me a screenshot of your code. Click on my name and then Message.

Hi @Alankar_Kampoowale

There are issues with the code you shared in two graded cells.

GRADED FUNCTION: image_generator

  1. Instantiate the ImageDataGenerator class.
    Remember to set the rescale argument.
    The instruction above says to set rescale to the correct argument, but you wrote rescale='True', which is incorrect. You are supposed to assign a numeric value to the rescale parameter, not True or False; the ungraded lab and the videos show which value to use.

  2. For train_generator, your directory argument is incorrect: you assigned it base_dir. Assign the value that base_dir refers to, the string "./data".
    directory: should be a relative path to the directory containing the data
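A minimal sketch of the corrected generator setup following the two points above. The target_size and batch_size values are placeholders, and a temporary stand-in folder replaces "./data" so the snippet runs anywhere:

```python
import os
import tempfile
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# rescale expects a numeric factor, not True/False: each pixel is
# multiplied by it, so 1./255 maps values from [0, 255] into [0, 1].
train_datagen = ImageDataGenerator(rescale=1./255)

# In the assignment, directory should be the relative path './data';
# here we build an empty stand-in folder so the sketch is runnable.
data_dir = tempfile.mkdtemp()
for label in ('happy', 'sad'):
    os.makedirs(os.path.join(data_dir, label))

train_generator = train_datagen.flow_from_directory(
    directory=data_dir,        # in the assignment: './data'
    target_size=(150, 150),    # placeholder size
    batch_size=10,             # placeholder batch size
    class_mode='binary')
```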

GRADED FUNCTION: train_happy_sad_model

  1. Your choice of units is not incorrect, but I would advise keeping the layer units simpler in case you do not get the required result when training the model.
  2. For the loss, you are supposed to use the callable losses.binary_crossentropy, not the string 'losses.binary_crossentropy'. Remember there is a difference between using 'binary_crossentropy' and losses.binary_crossentropy.
    Next, the optimizer choice "rmsprop" is not correctly recalled. Use optimizers.RMSprop(learning_rate=0.001).
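A minimal sketch of the corrected compile call, assuming the usual `from tensorflow.keras import losses, optimizers` imports. The one-layer model here is only a placeholder for the assignment's network:

```python
from tensorflow.keras import Sequential, layers, losses, optimizers

# Placeholder model; the assignment's network is a conv net that ends
# in a single sigmoid unit for binary (happy/sad) classification.
model = Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(1, activation='sigmoid'),
])

model.compile(
    loss=losses.binary_crossentropy,                   # the callable, not a string
    optimizer=optimizers.RMSprop(learning_rate=0.001),
    metrics=['accuracy'])
```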

Let me know if you are still encountering any error.

Regards
DP