Improving Deep Neural Networks - Week 3: TensorFlow Introduction

I am stuck trying to pass the test one_hot_matrix_test() and I believe that function has a bug. The asserts use lists, while the results from my one_hot_matrix are column vectors. I duplicated the function, changed the lists to column vectors, and my code passes. This is the ONLY error on the assignment.
Your test function is using:
assert np.allclose(result, [0., 1., 0., 0.]), "Wrong output. Use tf.one_hot"
but it should be:
assert np.allclose(result, [[0.], [1.], [0.], [0.]]), "Wrong output. Use tf.one_hot"
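
For reference, here is a quick snippet (illustrative only, not the course code) showing why np.allclose treats those two shapes differently:

import numpy as np

column = np.array([[0.], [1.], [0.], [0.]])   # shape (4, 1): a column vector
flat = np.array([0., 1., 0., 0.])             # shape (4,): a plain 1D array

# Broadcasting turns the comparison into a (4, 4) element-by-element check,
# so the column vector does not "equal" the flat list even though the values match.
print(np.allclose(column, flat))   # False
print(np.allclose(flat, flat))     # True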

Can you please fix the function?

Thanks

I think you’ve posted this message in the wrong course area. You put it in Course 4, but I think Course 2 is correct.

Yes, my bad. It is Course 2, but the name and week are correct. How can I move the message to the right place? I reposted the message in C2W3.

I am still waiting for some help here. I cannot advance in the course because of this. How do I ask for support on this issue?

Thanks

You can move a thread by using the “pencil” icon on the title. I moved it for you to Course 2.

Have another look at the instructions. Did you include the tf.reshape as the instructions suggested? Maybe the parameters you used are not what was intended. I did not have any problem passing the test cases as listed. As you can see by examining the test cases, they are expecting a 1D tensor as the output, so you have to use tf.reshape to create that.
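
For intuition only, here is a generic illustration of the shapes involved (made-up label and depth values, not the assignment's exact arguments):

import tensorflow as tf

label, depth = tf.constant(1), 4

encoded = tf.one_hot(label, depth)
print(encoded.shape)                        # (4,) for a scalar label

# If the encoding comes out rank 2 (for example (1, 4) or (4, 1)),
# tf.reshape flattens it into the 1D tensor the tests expect:
print(tf.reshape(encoded, (depth,)).shape)  # (4,)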

I am also having the exact same issue; can someone please clarify? I have used tf.one_hot and tf.reshape as asked in the question, and it works fine on my demo test, but the test in the workbook gives an error.

Here, under Test1, result.shape[0] will be equal to 1 (as shown in [0., 1., 0., 0.]), but depth is equal to 4. This might be the issue, in my opinion, because changing result.shape[0] to result.shape[1] passes the test.

Please clarify it

That means you did not correctly use “reshape” to convert your output to a 1D tensor. You can see that the test logic expects 1D, not 2D tensors. In that case there is only one element of the shape, right?

Also note that the test cell is not modifiable in any case, so you can’t change it.

The fact that there are only one set of square brackets there is the clue. Try this and watch what happens:

import tensorflow as tf

zed = tf.constant([0., 1., 0.])
print(zed)
zed = tf.constant([[0., 1., 0.]])
print(zed)
zed = tf.constant([[0.], [1.], [0.]])
print(zed)

Running that gives this result:

tf.Tensor([0. 1. 0.], shape=(3,), dtype=float32)
tf.Tensor([[0. 1. 0.]], shape=(1, 3), dtype=float32)
tf.Tensor(
[[0.]
 [1.]
 [0.]], shape=(3, 1), dtype=float32)

See the difference in the shapes and the brackets? The first value is a 1D tensor and the others are 2D. Note that a 1D tensor is a “vector” with a length. Notice that the “shape” has only one element, meaning that the idea of “row vector” or “column vector” just doesn’t apply, right? It’s just a vector with no orientation. You need that second dimension in order to get orientation.
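
One more way to see the difference is with tf.rank (just an illustrative snippet):

import tensorflow as tf

v = tf.constant([0., 1., 0.])          # 1D: a plain vector, no row/column orientation
m = tf.constant([[0.], [1.], [0.]])    # 2D: a column vector with shape (3, 1)

print(tf.rank(v).numpy(), v.shape)     # 1 (3,)
print(tf.rank(m).numpy(), m.shape)     # 2 (3, 1)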

Got it, sir, thanks. I was giving (depth, -1) as the shape argument of reshape instead of just (depth).
Thanks once again.

No, I didn’t modify the content of the test cells. I just copy-pasted the whole cell into a new one and made the changes there.

It’s good that you figured out how to perform that experiment. Of course the larger lesson here is that if you feel that you need to modify the test code, what you really should be doing is figuring out why your code does not meet the expectations of the test code. :nerd_face:

Yup! Got it.
But running the very next cell gives an error, even though that cell was pre-written.

Can you tell me what I am doing wrong here? All tests above this cell have passed perfectly.

It’s the same lesson again: it’s not a problem with the test cell, it’s a problem with your code. Now you need to figure it out.

That error message means that the reshape call is not happy with the “shape” argument you passed it: you probably just used a scalar, since you only have one dimension. The “shape” argument needs to be a python tuple of integer values. Try “(depth)” or maybe “(depth,)” or even “(-1,)”.


Solved!
Thanks a lot. I didn’t know that “shape” requires a tuple of integer values; I need to read the documentation more carefully.

Interesting. It turns out using “(depth)” also fails with the same error message. You need a comma in there to convince reshape it is rank 1 as opposed to rank 0.
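
Here is a quick throwaway check (not the assignment code) that shows the difference:

import tensorflow as tf

t = tf.constant([[0.], [1.], [0.], [0.]])   # shape (4, 1)
depth = 4

print(tf.reshape(t, (depth,)).shape)   # (4,) -- the comma makes it a rank-1 shape
print(tf.reshape(t, [depth]).shape)    # (4,) -- a one-element list also works
print(tf.reshape(t, (-1,)).shape)      # (4,) -- let reshape infer the length

try:
    tf.reshape(t, (depth))             # (depth) is just the integer 4: a rank-0 shape
except Exception as err:
    print(type(err).__name__)          # reshape rejects a scalar shape argument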

Now of course there is another question here, to which I don’t know the answer: why does the specific test cell for one_hot_matrix pass with the rank 0 shape? But then it fails later.

Yup, it didn’t work with (depth) but worked with [depth], (depth,) and (-1,).

For the test cell of one_hot_matrix, I guess there isn’t any check in it that requires the rank to be 1 rather than 0.

Really glad this thread was here, thank you so much for iterating; I hit exactly the same issue.

@paulinpaloalto, the bit that was most confusing to me was that in Course 1, Edward seemed pretty adamant that we should always avoid creating rank-1 arrays and favor (N,1) columnar matrices. There are two considerations/questions here for me:

  1. I’m assuming that this is a required constraint from the tf map function; however, it would seem to imply that iteration, rather than a vectorized transform, has to be done by the map function, unless it’s doing some exciting transform magic on the one_hot_matrix function itself.
  2. At the very least, the code documentation is incorrect. At the time of writing, the docstring reads “one_hot -- tf.Tensor A single-column matrix with the one hot encoding.” That would fit the strong guidance from Course 1, but it fails the tests as described in this thread, which require the rank-1 (or, as you point out, rank-0 as well) array.

Keen to hear your thoughts.

Hi, Tim.

Glad to hear the thread was helpful. The use of the “map” function is new in the updated course as of April 2021 (part of the conversion from TF1 to TF2). I really haven’t had time to go any deeper on how TF.map works, but it looks like it must be the moral equivalent of a python “iterator”, so it can’t handle an input with more than one dimension, even if one of the dimensions is trivial.
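
For what it’s worth, here is a tiny sketch of that element-by-element behavior, assuming the labels flow through a tf.data.Dataset (that assumption about the assignment pipeline is mine):

import tensorflow as tf

# Assumed setup: labels arrive as a tf.data.Dataset of scalar values.
labels = tf.data.Dataset.from_tensor_slices([2, 0, 1])

# map() applies the encoding function to one element at a time.
one_hot_labels = labels.map(lambda y: tf.one_hot(y, depth=4))

for encoded in one_hot_labels:
    print(encoded.shape, encoded.numpy())   # each element is rank 1: shape (4,)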

You’re right that Prof Andrew Ng made a big deal about 1D arrays not being the way to go in Course 1. But back there we were dealing with pure python and numpy. Apparently life gets more complicated once you get TF into the picture. Not everything is in our control in the same way as when we are writing everything ourselves from scratch. It’s the classic good news/bad news story. The good news is you don’t have to write everything yourself. The bad news is you have to deal with the APIs as they are, not as you wish them to be. :nerd_face:

It’s a good point that the docstring for the one_hot function is at best misleading and should be fixed. I’ll file a bug. Thanks for making this point!

Thanks Paul, and my apologies for both getting Prof Ng’s name wrong and being overly informal at the same time!

The TF APIs are indeed quite a departure from the NumPy ones and have some pretty interesting mechanics to boot!