ValueError: Shapes (2, 4) and (2, 6) are incompatible.

I can't get this one to compute. Any hints?

1: I transposed logits and labels.

2: from_logits=True

Okay, I passed logits[0] and labels[0] and I got:

tf.Tensor(0.06969354, shape=(), dtype=float32)

AssertionError: Test does not match. Did you get the mean of your cost functions?

**Expected output**

tf.Tensor(0.4051435, shape=(), dtype=float32)

I used from_logits=False and I got:

tf.Tensor(0.39053392, shape=(), dtype=float32)

The dimension mismatch means that you “hard-coded” the number of classes in your “one hot” routine.

The other error is that you are passing the results as logits to the cost function, right? So you need to specify “True” for that flag. Here’s a thread with a bit more info.
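To make the flag concrete, here is a minimal NumPy sketch (not the course code; the function names here are made up) of what *from_logits* controls: with `from_logits=True` the loss applies softmax before taking the log, whereas with `from_logits=False` it takes the log of whatever you pass in, so feeding raw logits produces a wrong (here even negative) value:

```python
import numpy as np

def softmax(z):
    # Numerically stabilized softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_crossentropy(labels, preds, from_logits=False):
    # labels, preds: shape (batch, classes).
    # Returns the MEAN over examples -- a single scalar,
    # which is what the grader compares against.
    if from_logits:
        preds = softmax(preds)
    per_example = -(labels * np.log(preds)).sum(axis=-1)
    return per_example.mean()

# Hypothetical logits for 2 examples and 3 classes:
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

right = categorical_crossentropy(labels, logits, from_logits=True)
# Treating raw logits as probabilities gives a meaningless number
# (negative here, since some "probabilities" exceed 1):
wrong = categorical_crossentropy(labels, logits, from_logits=False)
```

That is why a run with `from_logits=False` can produce a value close to, but not equal to, the expected one: it is simply computing a different quantity.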

It did not help, to be honest.

You’ve mentioned lots of different mistakes in your various posts. For example, it’s a mistake to index the *labels* or *logits*: you should use the whole tensors. You also need to transpose them, and you need to set the *from_logits* flag to *True*.

So what is the current incorrect value that you are getting?

Without indexing I’ve got:

ValueError: Shapes (2, 4) and (2, 6) are incompatible (with transposing).

ValueError: Shapes (4, 2) and (6, 2) are incompatible (without transposing).

I gave you the answer about that in my very first reply in this thread. Your “one hot” routine is incorrect: you are hard-coding the number of classes to 4. That passes the test for that function, but fails on any other test case that doesn’t involve 4 classes.
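As an illustration of the bug (a NumPy sketch, not the actual assignment code, and the function names are mine), compare a routine with the class count frozen at 4 against one that takes the depth as a parameter, the way `tf.one_hot(labels, depth)` does. With labels from a 6-class dataset, the hard-coded version produces shape `(2, 4)` against logits of shape `(2, 6)` -- exactly the shapes in the error message:

```python
import numpy as np

# Labels drawn from a 6-class dataset (the values happen to be < 4
# here, so the buggy version doesn't crash outright -- it just
# produces the wrong shape).
labels = np.array([1, 3])

def one_hot_hardcoded(y):
    # Buggy: the number of classes is frozen at 4.
    return np.eye(4)[y]

def one_hot(y, depth):
    # Fixed: depth is a parameter, mirroring tf.one_hot(y, depth).
    return np.eye(depth)[y]

bad = one_hot_hardcoded(labels)    # shape (2, 4) -- mismatch
good = one_hot(labels, depth=6)    # shape (2, 6) -- matches the logits
```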

The inputs for the *compute_cost* test case are the output of applying the one hot routine to one of the input datasets.

I’ve changed it to (depth,) and it works.
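In case it helps anyone else, here is my understanding of the fix as a NumPy sketch (an assumption on my part -- the real routine uses `tf.one_hot` and `tf.reshape`): the reshape target should be `(depth,)` rather than a hard-coded `(4,)`, so the routine works for any number of classes:

```python
import numpy as np

def one_hot_matrix(label, depth):
    # One-hot encode a single label, then reshape to (depth,) --
    # NOT to a hard-coded (4,) -- so any class count works.
    return np.eye(depth)[label].reshape((depth,))
```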

Thanks for your time and have a nice day.

But it's still a bit strange that the mistake wasn't flagged.

Yes, it would be better if they had two different test cases with different numbers of classes to catch that type of bug. But it is always a mistake to “hardcode” things when you aren’t forced to do that.