ValueError while calling the fit method

I am trying to implement the following code:

import numpy as np
import tensorflow as tf

class SimpleDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super(SimpleDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(name="kernel", initial_value=w_init(shape=(input_shape[-1], self.units), dtype='float32'), trainable=True)
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(name="bias", initial_value=b_init(shape=(self.units,), dtype='float32'), trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype = float)

y = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype = float)

model = tf.keras.models.Sequential([SimpleDense(units = 1)])

model.compile(optimizer='adam', loss='mean_squared_error')

model.fit(x, y, epochs = 500, verbose = 0)

print(model.predict([10.0]))

But I keep getting the following error:

ValueError: Exception encountered when calling layer "sequential_7" (type Sequential).

Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.

Call arguments received by layer "sequential_7" (type Sequential):
  • inputs=tf.Tensor(shape=(None,), dtype=float32)
  • training=True
  • mask=None

You are using the example from Lab 2, W3, I see. Why don't you try instantiating the SimpleDense outside the Sequential and then passing it into the Sequential? Everything else looks the same. You could also probably remove the super(…).__init__() call.

I don't think the super call should be removed, since super() is responsible for initializing the inherited parent class; the _trainable attribute does not work if super() is removed.
I tried instantiating the SimpleDense outside the Sequential and passing the instantiated object in, but it still gives the same error.

Interestingly, it works perfectly well in the Colab notebook but not in a Jupyter notebook on my laptop.

I am still searching for a solution to this problem. Any help would be greatly appreciated!

As far as I know, the explicit arguments to super() were only needed in earlier versions of Python; in Python 3 the zero-argument form super().__init__() is enough. Either way, that shouldn't change much here.
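To illustrate the point about super(): here is a minimal pure-Python sketch (using a made-up FakeLayer base class instead of Keras, so it is self-contained). Attributes like _trainable are set in the parent's __init__, so a subclass that never calls super().__init__() simply never gets them:

```python
class FakeLayer:
    """Stand-in for tf.keras.layers.Layer: sets bookkeeping attributes in __init__."""
    def __init__(self):
        self._trainable = True

class GoodDense(FakeLayer):
    def __init__(self, units):
        super().__init__()  # Python 3 zero-argument form; same as super(GoodDense, self).__init__()
        self.units = units

class BadDense(FakeLayer):
    def __init__(self, units):
        # super().__init__() deliberately omitted
        self.units = units

good = GoodDense(1)
print(good._trainable)        # the parent's attribute exists: True

bad = BadDense(1)
try:
    print(bad._trainable)
except AttributeError as e:
    print("missing attribute:", e)  # _trainable was never set
```

So the two spellings of super are interchangeable in Python 3, but dropping the call entirely skips the parent's initialization.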

If it works in Colab but not on your machine, then I would thoroughly check for installed package versions that differ between your machine and Colab; some investigative work needs to be done on your side. That's my suggestion!

I am having the same issue, please advise.

I ran into this same problem when running the code in Google Colab, which is using TensorFlow 2.8.1. The same code works in the Coursera notebook, which is running TensorFlow 2.1.0. (These defaults change over time, so you may get different results.)

I was able to fix it by changing the dimensions of xs to match the 2-D input that TensorFlow expects.

xs = np.array([-1.0,  0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
xs = np.expand_dims(xs, axis=-1)

This changes the shape of xs from (6,) to (6, 1).
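For reference, here is a small sketch of what that reshape does; nothing TensorFlow-specific, this is plain NumPy:

```python
import numpy as np

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
print(xs.shape)                      # (6,)

# Add a trailing feature axis so each sample becomes a 1-element vector.
xs2 = np.expand_dims(xs, axis=-1)
print(xs2.shape)                     # (6, 1)

# Equivalent alternatives:
assert np.array_equal(xs2, xs.reshape(-1, 1))
assert np.array_equal(xs2, xs[:, np.newaxis])
```

With the trailing axis, the layer's build() receives input_shape=(None, 1), so input_shape[-1] is 1 rather than None.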

Adding this call to reshape xs is also backwards compatible with TensorFlow 2.1, so I'm guessing the implicit handling of 1-D inputs was deprecated at some point.
