C3W1_Assignment UNQ_C1 combine_sample assert confusion

The combine_sample code triggered the following assertion:

I have implemented the combining as I did in the assignment C1W3_WGAN_GP:

  1. I created a boolean tensor named epsilon by testing whether a torch.rand tensor (the same size as the real images) is less than the given probability of real images.
  2. I then assigned to target_images the sum of the real images times epsilon and the fake images times the logical NOT of epsilon.

I think the assert message “# Make sure that no mixing happened” might mean that the tensor target_images is the correct length, but I don’t know what “mixing” means.
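For what it’s worth, here is a small sketch of what “mixing” refers to (the names below are illustrative, not the assignment’s actual code): with an element-wise mask, a single output image can contain pixels from both its real and its fake version, while a per-sample mask keeps each image whole.

```python
import torch

torch.manual_seed(0)
n, c, h, w = 4, 1, 2, 2
reals = torch.ones(n, c, h, w)
fakes = torch.zeros(n, c, h, w)
p_real = 0.5

# Element-wise mask: every pixel is chosen independently, so one output
# image can contain pixels from both reals and fakes ("mixing").
elem_mask = torch.rand(n, c, h, w) < p_real
mixed = reals * elem_mask + fakes * ~elem_mask

# Per-sample mask: one boolean per image, broadcast over the pixel
# dimensions, so each output image is entirely real or entirely fake.
sample_mask = (torch.rand(n) < p_real).view(n, 1, 1, 1)
unmixed = reals * sample_mask + fakes * ~sample_mask

# With reals all ones and fakes all zeros, every image in `unmixed`
# is all-ones or all-zeros; images in `mixed` need not be.
per_image = unmixed.view(n, -1)
```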

Please help me understand the assert message.

Hi Jswat!

What you have done is logically incorrect. It would be easier for me to explain this on paper → please refer to the attachments!

This is where you are wrong, kindly go through the instructions that are mapped to this cell once again to get a clear understanding and try again.

Regards,
Nithin

Thanks for the feedback.

1 Like

Dear All,

I am experiencing an error on the same task.

I used torch.multinomial to randomly select a proportion of indices from a uniform indicator vector (all 1s). This produced indices at random positions in the correct proportions, which I then used to select a subset from the sample. I did this for both the real and the fake indices, concatenated the selected samples, and concatenated the selected indices. Finally, I sorted the combined indices in ascending order and used them to reorder the concatenated samples.

I am, however, not getting the order right and am incurring the following assert error:

     36 # Make sure that the order is maintained
---> 37 assert torch.abs(test_combination - test_reals).sum() < 1e-4

Thanks in advance,

Mark

Have you seen this thread? Please read from that post forward and let us know if that sheds any light or looks related to your issues.

It may be that I’m simply not understanding what you mean, but that all sounds a bit worrying. The point is to do things “in place” and also “row-wise”, not “element-wise”. You clone the “reals” and then replace some of those samples (in-place) with fakes.
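To make the clone-and-replace idea concrete, here is a minimal sketch (the function and variable names are illustrative, not the assignment’s actual code):

```python
import torch

def combine_sample(real, fake, p_real):
    # Row-wise, in-place idea: start from a clone of the reals and
    # overwrite whole rows with fakes. One boolean per sample decides
    # which rows are replaced; untouched rows stay exactly where they were.
    target = real.clone()
    fake_rows = torch.rand(real.shape[0]) > p_real
    target[fake_rows] = fake[fake_rows]
    return target

torch.manual_seed(0)
reals = torch.ones(8, 3)
fakes = torch.zeros(8, 3)
out = combine_sample(reals, fakes, 0.5)
```

Because rows are only overwritten in place, every row that stays real is identical to the corresponding row of `reals`, which is exactly what the “order is maintained” assert checks.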

I also did not see any reference to torch.multinomial in the instructions. They do discuss using torch.rand or torch.bernoulli. I used torch.rand FWIW …
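For reference, the two functions mentioned in the instructions can produce the same kind of per-sample boolean mask (a hedged sketch; `p_real` here is just an assumed probability of keeping a real sample):

```python
import torch

n, p_real = 6, 0.7

# torch.rand: compare uniform samples against the probability threshold.
mask_rand = torch.rand(n) < p_real

# torch.bernoulli: draw 0/1 directly from the same probability, then
# convert to boolean. Both give one boolean per sample.
mask_bern = torch.bernoulli(torch.full((n,), p_real)).bool()
```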

I had a look at the other thread and it makes much more sense now. It did say there were multiple ways to sort this, but what I did was wrong. I randomly sampled from each of real and fake and concatenated them in a way that the indexing wouldn’t have worked either. The solution described on the thread that you linked works. It swaps elements nicely preserving the order. Thanks again for your help.

1 Like