C2W2 Assignment at colab

I have completed the assignment on Coursera.
I am trying to run the same code in Colab. It works fine until Exercise 7, which raises an AttributeError on

context.run(transform, enable_cache=False)

Colab installs tf version 2.4.1 and tfx version 0.30.0.

Could it be due to this version difference?

Hi @bolt99
welcome to the community!

In my opinion, the issue you are facing is not related to the TF and TFX versions, because, from what you say, you have already run `context.run` successfully earlier in the same lab.
First take a look at the following link to be sure that the Colab setup is fine.

Then I can suggest a couple of attempts to fix the issue.

  1. Maybe there is a problem in the previous statement where transform is built.
  2. Maybe your environment is dirty. I faced a similar problem in an ungraded lab and had to clean the ‘pipeline’ folder. To do that, take a look at the following post.


Thank you so much for your prompt response. I will try your solution.


Hi Fabio,
I ran the C2W2 Assignment notebook right from scratch in Colab. I got the same error as before:
AttributeError: ‘Tensor’ object has no attribute ‘indices’.

Mind you, the same transform function worked in the Coursera notebook, and I have successfully submitted the assignment.
The error occurs only when I run it in Colab.
Please help.

Hi Sanjoy
I remember seeing a similar problem in other posts… let me check. Someone else has faced the same issue you are describing.
Keep in touch

Hi Sanjoy
here I am. Please take a look at the following thread.
It fixed a similar problem running a notebook for Course 1 under Colab.
In the meantime I will try to figure out a different solution

It sounds like a version mismatch for TensorFlow.
I’m going to run the lab on my Colab…I’ll keep you posted.

Just for my understanding: I noticed that another issue was fixed by @chris.favila. Is that a previous one? I had assumed that the current issue occurred after the fix of the previous one. Correct?


Yes that’s a different problem and it’s been solved.

This one is version-related, because all 19 frames in the error traceback are internal to the libraries used.

I don’t understand the problem, because Colab starts with a blank Jupyter notebook, which is what we need.

JupyterLab may have issues running in Colab.

Hi Sanjoy and Fabio! Based on the error message, I think this is related to one of the features already being a Tensor and calling _fill_in_missing() on it. That method only works for SparseTensors because Tensors don’t have an indices property.

I recommend trying to remove the use of _fill_in_missing() in the transform module and see if it works.

It may be that with the later version of TFX, the features are automatically converted to dense tensors instead of sparse ones. In previous versions, you would have to set the infer_feature_shape argument of SchemaGen to True so that features parsed with the schema come out as dense tensors.
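To see why that produces the AttributeError above, here is a TF-free sketch with hypothetical stand-in classes (not the real tf.Tensor / tf.sparse.SparseTensor types): only the sparse stand-in carries an indices attribute, so code that assumes sparseness fails on a dense input unless it checks the type first.

```python
# Stand-in classes for illustration only; the real types are
# tf.Tensor (dense, no .indices) and tf.sparse.SparseTensor.
class DenseTensor:
    pass

class SparseTensor:
    def __init__(self, indices, values):
        self.indices = indices
        self.values = values

def fill_in_missing(x):
    # Guard: a dense input is returned unchanged, so .indices is
    # only ever read from a genuinely sparse input.
    if not isinstance(x, SparseTensor):
        return x
    return (x.indices, x.values)

dense = DenseTensor()
assert fill_in_missing(dense) is dense  # dense passes through untouched

sparse = SparseTensor(indices=[[0, 0]], values=[1])
assert fill_in_missing(sparse) == ([[0, 0]], [1])

# Without the guard, reading dense.indices raises the very error
# reported in this thread:
try:
    dense.indices
except AttributeError as err:
    print(err)  # 'DenseTensor' object has no attribute 'indices'
```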

Hope this helps!


Will do that, Chris, and revert.

You were right, Chris. It worked correctly after removing the _fill_in_missing processing step on the label.
One more thing: the transformed examples go into Split-train and Split-eval folders, as opposed to train and eval folders. This is also a version-related change in TFX.
Thanks a ton

Instead of removing the _fill_in_missing call on the label, I added an if inside the function to check whether the input is a sparse tensor; if it is not, the function returns it as is. In this assignment the label is a dense tensor, so the check passes it through unchanged, as needed.

    if not isinstance(x, tf.sparse.SparseTensor):
        return x

So the full function becomes:

    import tensorflow as tf

    def _fill_in_missing(x):
        # Dense tensors have no .indices, so pass them through unchanged.
        if not isinstance(x, tf.sparse.SparseTensor):
            return x

        default_value = '' if x.dtype == tf.string else 0
        return tf.squeeze(
            tf.sparse.to_dense(
                tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
                default_value),
            axis=1)
@bolt99 I was trying your answer to solve the aforementioned problem, but I sense I am missing something, since I am now getting the error that ‘NoneType’ object has no attribute ‘name’.