Week 3 Exercise 3: What is the point of reshaping a tensor?

Hi,

I am on the one-hot matrix exercise, and I don't understand the point of reshaping the tensor. Without reshaping, my code looks like this:

one_hot = tf.one_hot(label, depth, axis=0)

My output matches Test 1 exactly, but it differs from Test 2. Test 2 expects a tensor with shape (4,), but my tensor's shape is (4, 1). With reshaping, however, I can pass both tests. So I don't understand what reshaping does in this case, or why I can pass Test 1 but not Test 2 without reshaping the tensor.
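For reference, here is a minimal sketch of what I am observing (assuming depth = 4 to match the tests):

    import tensorflow as tf

    label = tf.constant(1)    # scalar, as in Test 1
    label_2 = [2]             # one-element vector, as in Test 2
    depth = 4

    print(tf.one_hot(label, depth, axis=0).shape)    # (4,)   -> matches Test 1
    print(tf.one_hot(label_2, depth, axis=0).shape)  # (4, 1) -> fails Test 2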


I think the test cases are designed that way so that the learner gets helpful error messages.

First, we pass a scalar, and for that you don't need the reshape operation. If this test fails, we know the learner didn't use tf.one_hot.

We pass a vector for the second test case. If the output is different (and the first test case passed), we know that the learner forgot to use tf.reshape.

    label = tf.constant(1)
    assert np.allclose(result, [0., 1., 0., 0.]), "Wrong output. Use tf.one_hot"


    label_2 = [2]
    assert np.allclose(result, [0., 0., 1., 0.]), "Wrong output. Use tf.reshape as instructed"

As for why we need to reshape at all: it has to do with the rest of the assignment, where we need the correct matrix dimensions to use the formulas presented in Prof Andrew Ng's lectures, and with how TensorFlow operates on datasets (see the sketch below).
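Here is a minimal sketch of that last point, assuming depth = 4 as in the tests quoted above (the labels here are made up). dataset.map applies your function to one label at a time, so each element has to come out as a flat (depth,) vector for the mapped dataset to have a consistent shape:

    import tensorflow as tf

    depth = 4
    labels = tf.data.Dataset.from_tensor_slices([1, 2, 0])

    # Each element of the dataset is a scalar label; the mapped
    # function must return a flat (depth,) one-hot vector.
    one_hots = labels.map(
        lambda y: tf.reshape(tf.one_hot(y, depth, axis=0), shape=[-1]))

    for v in one_hots:
        print(v.numpy())  # [0. 1. 0. 0.], then [0. 0. 1. 0.], then [1. 0. 0. 0.]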

You will also see a lot of transposing in the remainder of the assignment. That is usually because TensorFlow prefers (batch_size, features), whereas Prof Andrew Ng uses (features, batch_size) in his lectures. Both conventions are, of course, correct; it is a matter of taste.
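To make that concrete, here is a small made-up example of converting between the two conventions:

    import tensorflow as tf

    # Hypothetical mini-batch: 2 samples with 3 features each,
    # laid out the way tensorflow usually delivers data.
    X_tf = tf.constant([[1., 2., 3.],
                        [4., 5., 6.]])   # (batch_size, features) = (2, 3)

    # The lecture formulas expect (features, batch_size):
    X_ng = tf.transpose(X_tf)            # (3, 2)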
