Hi, I’m trying to create a dataset from a list of images and I’m running into issues when creating the labels.
My objective is to detect multiple objects in every image, so I created labels shaped like this:
labels = [
    # Image 1
    [
        [100, 50],   # Object 1 attributes (x, y)
        [250, 100],  # Object 2 attributes
    ],
    # Image 2
    [
        [50, 60],    # Object 1 attributes
        [200, 70],   # Object 2 attributes
        [450, 40],   # Object 3 attributes
    ],
]
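For reference, here is a self-contained check of how that nested list parses. Note that by default tf.ragged.constant makes every inner dimension ragged (which matches the doubly-ragged tensor in the error below), while ragged_rank=1 keeps the trailing (x, y) pair dense:

```python
import tensorflow as tf

labels = [
    [[100, 50], [250, 100]],           # image 1: 2 objects
    [[50, 60], [200, 70], [450, 40]],  # image 2: 3 objects
]

# Default: both inner dimensions are ragged -> shape (2, None, None)
y_default = tf.ragged.constant(labels)

# ragged_rank=1 keeps the trailing (x, y) dimension dense -> shape (2, None, 2)
y = tf.ragged.constant(labels, ragged_rank=1)
print(y.shape)          # (2, None, 2)
print(y.row_lengths())  # [2 3] objects per image
```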
I have successfully trained a model when there is just one object to detect. But with multiple objects per image it fails; I have tried several ways to feed these labels, and none have worked. Here is my current attempt:
dataset = tf.data.Dataset.from_tensor_slices((X, tf.ragged.constant(y)))
tfl = tf.keras.layers  # layer alias used below

model = tf.keras.Sequential([
    tfl.ZeroPadding2D(padding=(10, 10), input_shape=(535, 535, 4)),
    tfl.Conv2D(32, (7, 7)),
    tfl.BatchNormalization(axis=-1),
    tfl.ReLU(),
    tfl.MaxPool2D(),
    tfl.Flatten(),
    tfl.Dense(2, activation='linear')  # outputs a single (x, y) pair per image
])
model.compile(optimizer='adam', loss='mae', metrics=['mse', 'mae'])
Error:
TypeError: Some of the inputs are not tf.RaggedTensor. Input received: [tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("Cast_20:0", shape=(None,), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)), <tf.Tensor 'sequential_24/dense_24/BiasAdd:0' shape=(None, 2) dtype=float32>]
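One fallback I’m aware of (though I’d prefer ragged labels if they can work) is padding every image to a fixed maximum object count so the targets become an ordinary dense tensor. A minimal sketch, assuming a maximum of 3 objects per image and -1 as a sentinel for missing objects:

```python
import tensorflow as tf

labels = [
    [[100, 50], [250, 100]],
    [[50, 60], [200, 70], [450, 40]],
]

y_ragged = tf.ragged.constant(labels, ragged_rank=1)

# Pad to the longest row; missing objects become (-1, -1) sentinels.
y_dense = y_ragged.to_tensor(default_value=-1)
print(y_dense.shape)  # (2, 3, 2)
```

With dense targets like this, the final Dense layer would presumably need max_objects * 2 units (plus a Reshape to (3, 2)), and the loss would need to mask out the -1 entries, which is why I’d rather get ragged labels working directly.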
Please help =)