Cats vs Dogs Saliency Maps

When I try to train the model using fit(), I get the error below.
model.fit(train_batches, epochs=3) (attaching a screenshot for detail)

ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))

How can I overcome this problem?
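
For reference, a minimal toy setup (not the assignment's model or data) that reproduces this class of error, where the model emits two values per example but the labels are single integers:

import numpy as np
import tensorflow as tf

# Toy model: a 2-unit output trained against (batch, 1) integer labels.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))

x = np.random.rand(8, 4).astype('float32')
y = np.random.randint(0, 2, size=(8, 1))

model.fit(x, y, epochs=1)
# ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))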

2 Likes

Hi there,

The error is telling you that your predictions and labels are not the same shape, so the loss function fails when the loss is calculated. Check the shapes of these two in your code.
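
For example, a quick sanity check, assuming train_batches yields (image, label) pairs as in your fit() call:

# Pull one batch and compare the shapes by hand.
images, labels = next(iter(train_batches))
predictions = model(images)

print("labels:     ", labels.shape)       # e.g. (32, 1) or (32,)
print("predictions:", predictions.shape)  # e.g. (32, 2)
# binary_crossentropy needs these two shapes to match;
# sparse_categorical_crossentropy instead wants plain integer labels.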

1 Like

As gent.spah said.
My guess is that the labels are not one_hot tensors. Check whether you are passing plain integer labels:

[0,
 1,
 ...]

rather than the one-hot format:

[[1, 0],
 [0, 1],
 ...]
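
A minimal sketch of the conversion with tf.one_hot, assuming two classes:

import tensorflow as tf

labels = tf.constant([0, 1, 1, 0])      # integer class ids
one_hot = tf.one_hot(labels, depth=2)   # shape (4, 2)
print(one_hot.numpy())
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]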
2 Likes

OK, sir.
I am using the statement below to calculate the expected output; is it correct?
expected_output = tf.one_hot([class_index] * image.shape[0], num_classes)

For the loss:
loss = tf.keras.losses.binary_crossentropy(expected_output, predictions)
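
For reference, the shape this produces (hypothetical values, with a batch size of 1 standing in for image.shape[0]):

import tensorflow as tf

class_index = 0
num_classes = 2
expected_output = tf.one_hot([class_index] * 1, num_classes)
print(expected_output.shape)  # (1, 2)
# binary_crossentropy then needs predictions of the same (1, 2) shape,
# i.e. the model must emit two values per image.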

1 Like

No, they are both not right. Keep in mind that you have 2 distinct classes, and read the instructions and comments carefully. The labs for that week can be helpful too.

1 Like

Sir, as per your suggestion I made a change and removed the class index:
expected_output = tf.one_hot([label] * image.shape[0], num_classes)

loss = tf.keras.losses.sparse_categorical_crossentropy(expected_output, predictions)

I call do_salience() with the statement below:
do_salience('cat1.jpg', model, 0, "epoch0_salient")

I am getting this error:
ValueError: logits and labels must have the same shape ((1, 2) vs (0, 2))

How can I overcome this error? Kindly advise.

1 Like

Can anyone correct me? My model architecture is the same as the one given in the assignment. I have been trying to solve this 4th assignment for many days and am unable to resolve it. I have followed all the instructions, read all the comments in week 4, and referred to the previous lab, but it didn't work for me.

I am not able to overcome the problem.

When I use categorical_crossentropy, I get the error below:
UFuncTypeError: ufunc 'add' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')

When I use sparse_categorical_crossentropy, I get the error below:
ValueError: Shape mismatch: The shape of labels (received (2,)) should equal the shape of logits except for the last dimension (received (1, 2)).
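
For reference, the two losses expect differently shaped labels (a sketch with hypothetical values; the '<U32' in the first error is a NumPy string dtype, which suggests a string value ended up where a number was expected):

import tensorflow as tf

predictions = tf.constant([[0.9, 0.1]])     # shape (1, 2): one image, two classes

# categorical_crossentropy wants one-hot labels shaped like the predictions:
one_hot_label = tf.constant([[1.0, 0.0]])   # shape (1, 2)
print(tf.keras.losses.categorical_crossentropy(one_hot_label, predictions))

# sparse_categorical_crossentropy wants integer class ids, one per image:
int_label = tf.constant([0])                # shape (1,)
print(tf.keras.losses.sparse_categorical_crossentropy(int_label, predictions))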

1 Like

Hi again,

I did tell you that both your expected_output and your loss function are not right. In the first one you are missing a variable; in the second one you are not using the right loss function. Concentrate on those 2 points, think, and try a few choices.

1 Like

Dear Sir, as per your suggestion I have added the second variable to the expected output.
Is it correct? I executed it but got an error. I am confused about the second variable, i.e. image.shape[0], since in the ungraded lab we use an Inception net and here we use our own model.

As far as I know there are 2 classes in this assignment, so binary_crossentropy should work, but it didn't work for me.

Kindly suggest a loss function that I should try.

1 Like

Initially I experienced the same error as the OP, logits and labels not the same shape.

The guess by @guidini.ian is a good one, but in defense of us poor learners, I call your attention to the guidelines provided for the augment_images() preprocessing function:
Define a function that takes in an image and label. /begin_rant One of my pet peeves is functions without informative names. This one is named augment_images, plural, but as far as I can tell it takes in and operates on a single image, not multiple. /end_rant Here's what it says:

Create preprocessing function

Define a function that takes in an image and label. This will:

  • cast the image to float32
  • normalize the pixel values to [0, 1]
  • resize the image to 300 x 300

If you implement a function that does that, exactly that, and only that, the label parameter that is passed into the function remains unchanged. If I’m not mistaken, there needs to be one extra step there to mutate the parameter label before it is returned. A bit naughty that we are given detailed boilerplate for 3 out of the 4 steps, no? Did I miss something?
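
For concreteness, a function that does exactly those three steps would look something like this (a sketch, not necessarily the graded solution):

import tensorflow as tf

def augment_images(image, label):
    image = tf.cast(image, tf.float32)          # cast the image to float32
    image = image / 255.0                       # normalize pixel values to [0, 1]
    image = tf.image.resize(image, (300, 300))  # resize to 300 x 300
    return image, label                         # label passes through unchanged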

1 Like

No, there is no need to change the label; the instructions are correct as given. Later on, in the do_salience function, you define the expected_output, which converts the labels to one-hot encoding to be used further down to calculate the loss.

1 Like

Interesting. I guess I have to go back and play with the code some more, because I added a one_hot conversion inside augment_image() and my logits/labels size mismatch went away. To the best of my recollection, that’s the only change I made.

That raises a question for me: if label isn't mutated within that function, why is it passed in and returned?

It's used to map them together and create batches.
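
Roughly, assuming a pipeline along these lines (the dataset name and batch size are illustrative):

train_batches = (
    train_examples
    .map(augment_images)   # each (image, label) pair is transformed together
    .shuffle(1024)
    .batch(32)             # batching keeps images and labels aligned
)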

1 Like

Hi
I'm having similar difficulties to @nisarggandhewar, but with a different loss.
My do_salience function on cat1.jpg generates exactly the result shown in the given output, so I assume my predictions and labels have the same size. But when I try to fit the model I get a size-mismatch error. Any help would be appreciated.
This is the error log:


If I change the loss in model.fit it works, but the results are not good enough to pass the structural similarity test.

1 Like

@Shiri_Gordon, I’m not a mentor for this course, so I can’t answer your question.

But if you don’t get an answer shortly, it could be because you posted on a thread that’s been cold for two years.

You might have better luck starting a new thread.

1 Like

Thanks!

1 Like

This is most probably happening because your output and loss function are not appropriate for each other. From what I see in the assignment, the predictions are probably not right!
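
For example, two internally consistent pairings (a sketch, not the assignment's required architecture):

import tensorflow as tf

# Option A: two softmax units with one-hot (categorical) labels.
model_a = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(300, 300, 3)),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model_a.compile(optimizer='adam', loss='categorical_crossentropy')

# Option B: a single sigmoid unit with integer 0/1 labels.
model_b = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(300, 300, 3)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model_b.compile(optimizer='adam', loss='binary_crossentropy')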

1 Like

Thank you @gent.spah
I fixed all the code issues and still didn't pass the structural similarity threshold of 0.95.
I then found this thread in the course Q&A:

In order to pass the assignment, it should run in the fallback mode as explained here:

Thanks again

2 Likes