Week 1 Jazz project--how do I print tensor object values?

I cannot figure out what arguments to pass to one_hot, and I am trying to see what I’ve got in the various tensor objects to troubleshoot. But when I put in, say, print(x), all I get back is the tensor object. I can see the shape, but not the values. How do I see the values? Thanks!

One thing to try is the numpy() method, which gives you a straightforward numpy array.
It can get complicated if the tensor is part of a graph that has not yet been evaluated, but the error message will likely be a guide if you hit such cases.


Sorry, I’ve no clue what you mean by “the numpy() method”. If x were a numpy object, I’d just do print(x) and see the values; instead, print(x) gives the output I described above.
Trying print(numpy(x)) (actually print(np(x))) just gives the error: TypeError: 'module' object is not callable
If I try print(x.numpy()) (or print(x.np())) then I get the error: AttributeError: 'Tensor' object has no attribute 'numpy'
Can you please clarify?
Thanks.


I just posted a simple example from a colab notebook; I hope that helps to clarify.
You’ll notice that the tensor there has type EagerTensor, and the numpy() method turns it into a numpy array.
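In essence, the example was just something like this (an illustrative reconstruction, assuming TF2 eager execution, not the assignment code):

import tensorflow as tf

# In TF2, a tensor created eagerly is an EagerTensor with concrete values.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(type(x))           # <class 'tensorflow.python.framework.ops.EagerTensor'>
print(x.numpy())         # [[1. 2.] [3. 4.]]
print(type(x.numpy()))   # <class 'numpy.ndarray'>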


Sorry, I don’t see how this helps. Printing an EagerTensor, as in your example, shows the values anyway. However, the tensor type created in the LSTM exercise is apparently not EagerTensor: when I check the type of x, it is tensorflow.python.framework.ops.Tensor. Printing this object does not show the values, and x.numpy() yields the error I posted above. So neither approach shows the values.

Sorry about the confusion.
I’ve just tried to navigate into the model after running it with code like

onehot = inference_model.get_layer(index=6)   # grab the one-hot layer by position
onehot_output = onehot.output                 # the symbolic output tensor of that layer
print(onehot_output)                          # prints the tensor's description, not its values
print(type(onehot_output))                    # tensorflow.python.framework.ops.Tensor
onehot_output.numpy()                         # raises AttributeError: 'Tensor' object has no attribute 'numpy'

I was hoping that after the model has run, the values would be accessible from the layers in it.
But that attempt hits the same problem you reported.

Anyone with more experience of TensorFlow debugging who can add insight?

My guess is that the problem is that @gborden1 is actually in the old version of the course and is using TF1. There’s nothing to prevent someone in the old version of the course from accessing Discourse. The clue is the error message “'Tensor' object has no attribute 'numpy'”: the numpy() method is a TF2 construct. Try doing:

print("tf.__version__")

My guess is that you will see some 1.x value. In the new version of the course, here’s what you would see:

print("tf.__version__")
2.3.0

TF1 and TF2 are very different. You cannot expect TF1 code to work in the TF2 context or vice versa.
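To give one concrete example of the difference, here is how you would inspect a tensor’s value in each (just a sketch):

# TF1: build a graph, then evaluate it in a Session
x = tf.constant([1, 2, 3])
with tf.Session() as sess:
    print(sess.run(x))    # [1 2 3]

# TF2: eager execution by default, so values are available immediately
x = tf.constant([1, 2, 3])
print(x.numpy())          # [1 2 3]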

Thanks, but nope. When I execute that, it returns 2.3.0.
(Actually, what you wrote originally rendered for me as print("tf.__version__"), quotes and all, and executing that verbatim just prints the literal string tf.__version__. When I REMOVE THE QUOTES, as I believe you intended, then it gives me the version!!)

Thanks for doing the check that Paul suggested.

Here is (some of) what I see as happening.
The original post mentioned onehot so I’ll use that as an example.
At the point where we create the one-hot tensor, we are using code like:
x = tf.math.argmax(out, axis=-1)
x = tf.one_hot(x, n_values)

This is building part of a computation graph using the Keras functional API.
The tensor x has no values at this point, but it does know its eventual shape and how its value will be computed.

A little later, the Model constructor is used to build inference_model.
The tensors still have no values.

When we use inference_model to get some output values, data flows through the model and the tensors take on actual values.
That is why I navigated into the model to get a layer and then looked at its output: I was hoping that tensor would have an easily inspectable value.
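To make that concrete, here is a standalone sketch (illustrative shapes, not the assignment code; behaviour as in TF 2.3):

import tensorflow as tf

# Build a tiny functional-API graph: x is symbolic, with a shape but no values.
inp = tf.keras.Input(shape=(90,))
x = tf.math.argmax(inp, axis=-1)
x = tf.one_hot(x, depth=90)
print(x.shape)   # (None, 90) -- the eventual shape is known
print(type(x))   # tensorflow.python.framework.ops.Tensor, not EagerTensor
# x.numpy()      # would raise AttributeError: no values until data flows through

model = tf.keras.Model(inputs=inp, outputs=x)
result = model(tf.random.uniform((2, 90)))  # now data flows and values exist
print(result.numpy().shape)                 # (2, 90)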

I believe there are tf.print operations that can be added to the computation graph, but I’d prefer an expert to comment on their use and on deprecation (the old TF1 tf.Print is deprecated in favor of tf.print, as far as I can tell).
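For what it’s worth, here is the kind of usage I have in mind (a sketch only; wrapping tf.print in a Lambda layer is my own choice, not from the assignment):

import tensorflow as tf

def print_and_pass(t):
    tf.print("layer output:", t)  # executes when data flows, unlike Python's print
    return t

inp = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(3, activation="softmax")(inp)
x = tf.keras.layers.Lambda(print_and_pass)(x)  # print as a side effect, pass x through
model = tf.keras.Model(inputs=inp, outputs=x)

model(tf.random.uniform((2, 4)))  # the tf.print fires here, at execution time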

Thanks again for your efforts. I think at this point my biggest issue is that I keep getting this error in exercise 2: ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_2:0", shape=(None, 1, 90), dtype=float32) at layer "lstm". The following previous layers were accessed without issue:
My exercise 1 code passes all tests, and I compute out in exercise 2 the same way, so that should be good. I had already called argmax and one_hot just the way you wrote above. Finally, I did the RepeatVector step exactly as directed for 2E (RepeatVector(1)(x)) and end up with a tensor of the correct dimensions (None, 1, 90). So it is not clear to me why I am getting this error. I’ve restarted the kernel, re-executed the cells above, etc., numerous times, but no change.
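In outline, the steps look like this (shapes in the comments; this paraphrases what I did rather than quoting my notebook):

x = tf.math.argmax(out, axis=-1)        # (None,)
x = tf.one_hot(x, 90)                   # (None, 90)
x = RepeatVector(1)(x)                  # (None, 1, 90)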

Sorry, I think it is just a rendering problem with Discourse. I wrote that as ‘code’ and it formats correctly when I view it. The variable is bracketed by double underscores on both ends: tf dot underscore underscore version underscore underscore, i.e. tf.__version__.