Course-5, Week-1, Assignment-3, Exercise-1: Jazz improvisation with LSTM, _SymbolicException during model.fit()

I was working on the Jazz Improvisation with LSTM assignment in Course 5: Sequence Models. I'm getting the error below (screenshot attached) during the model.fit() call. My djmodel() and model.compile() steps ran successfully. I'm not able to figure out what this error means. Please help. Thank you.

Do you still need help with this issue?

I still need help with this

"
TypeError Traceback (most recent call last)
in
----> 1 history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 108 return method(self, *args, **kwargs)
109
110 # Running inside run_distribute_coordinator already.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1061 use_multiprocessing=use_multiprocessing,
1062 model=self,
---> 1063 steps_per_execution=self._steps_per_execution)
1064
1065 # Container that configures and calls tf.keras.Callbacks.

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution)
1115 use_multiprocessing=use_multiprocessing,
1116 distribution_strategy=ds_context.get_strategy(),
---> 1117 model=model)
1118
1119 strategy = ds_context.get_strategy()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
273 inputs = pack_x_y_sample_weight(x, y, sample_weights)
274
---> 275 num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs))
276 if len(num_samples) > 1:
277 msg = "Data cardinality is ambiguous:\n"

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py in <genexpr>(.0)
273 inputs = pack_x_y_sample_weight(x, y, sample_weights)
274
---> 275 num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs))
276 if len(num_samples) > 1:
277 msg = "Data cardinality is ambiguous:\n"

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

print(f"loss at epoch 1: {history.history['loss'][0]}")
"

I think that means your model() function contains a "None" value that shouldn't be there.
Or it could be in one of the functions that model() calls.
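To see why a stray None produces exactly this message: the failing line in the traceback is `int(i.shape[0])`, which TensorFlow's data adapter runs on every input and target. A symbolic Keras tensor has a shape whose first (batch) dimension is None, so if one of those sneaks into the fit() arguments instead of a concrete numpy array, the adapter ends up calling int(None). A minimal plain-Python sketch of that failure (not the assignment code; the shape below is hypothetical):

```python
# A symbolic Keras tensor's shape looks like (None, Tx, n_values).
# Calling int() on that None batch dimension is what raises the error.
shape = (None, 30, 90)  # hypothetical symbolic shape

try:
    int(shape[0])  # this is what data_adapter.py line 275 effectively does
except TypeError as err:
    print(err)  # mentions 'NoneType', matching the traceback above
```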

I can't figure it out, so I sent my code in a private message.

I’m going to do some speculating here. I may be totally wrong.

The line of code that triggers the error is:

history = model.fit([X, a0, c0], list(Y), epochs=100, verbose = 0)

In that line of code, a0 and c0 are initialized a couple of cells earlier. That’s probably safe.
But X is a global variable, which is initialized way up in the 3rd cell in the notebook, at this line:

X, Y, n_values, indices_values, chords = load_music_utils('data/original_metheny.mid')

I think it's possible that something in your code has modified the global variable "X", and it's causing that "TypeError: int() argument must be a …" error message.

Either that, or you haven’t re-run all of the cells from the top of the notebook, and X has been modified in some incorrect way.
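One quick way to check this before calling fit(): print the first dimension of every array you are about to pass in. They must all be the same concrete integer m (the number of training examples), and none may be None. A debugging sketch, assuming the shapes the assignment sets up; all sizes below are hypothetical stand-ins for what load_music_utils() actually returns:

```python
import numpy as np

# Hypothetical sizes matching the assignment's conventions:
# m examples, Tx time steps, n_values pitch classes, n_a LSTM units.
m, Tx, n_values, n_a = 60, 30, 90, 64
X  = np.zeros((m, Tx, n_values))   # inputs
Y  = np.zeros((Tx, m, n_values))   # targets, one slice per time step
a0 = np.zeros((m, n_a))            # initial hidden state
c0 = np.zeros((m, n_a))            # initial cell state

# Before model.fit([X, a0, c0], list(Y), ...), verify that every
# array has a concrete (non-None) first dimension, all equal to m.
arrays = [("X", X), ("a0", a0), ("c0", c0)]
arrays += [(f"Y[{t}]", y_t) for t, y_t in enumerate(Y)]
for name, arr in arrays:
    assert arr.shape[0] == m, f"{name} has first dimension {arr.shape[0]}"
print("all first dimensions are", m)
```

If one of these checks fails (or a shape prints as None), the corresponding variable was clobbered somewhere in the notebook, which points you at the cell to fix.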

So maybe the error is not in your djmodel() function; it's somewhere else in your notebook.

I recommend you rename your current notebook, get a new copy of the notebook, and start over being very careful to NOT modify anything outside of the functions you’re working on.