C5_W1_A3: Jazz Improvisation Ex2 music_inference_model

I tried three different combinations of x0/x, a0/a, and c0/c in:

inference_model=Model(inputs=[input_x, initial_hidden_state, initial_cell_state], outputs=the_outputs)

and here are the outputs for each case (two raise errors and one works without error):

  1. inference_model = Model(inputs=[x, a, c], outputs=outputs)

Error/Warning:

WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "functional_6" was not an Input tensor, it was generated by layer repeat_vector_199.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: repeat_vector_199/Tile:0
WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "functional_6" was not an Input tensor, it was generated by layer lstm.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: lstm/PartitionedCall_229:2
WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "functional_6" was not an Input tensor, it was generated by layer lstm.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: lstm/PartitionedCall_229:3

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-21-fe42db94b8ce> in <module>
      1 ### YOU CANNOT EDIT THIS CELL
----> 2 inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)

<ipython-input-20-299c8f0c5372> in music_inference_model(LSTM_cell, densor, Ty)
     61         #print()
     62     # Step 3: Create model instance with the correct "inputs" and "outputs" (≈1 line)
---> 63     inference_model = Model(inputs=[x, a, c], outputs=outputs)
     64 
     65     ### END CODE HERE ###

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in __new__(cls, *args, **kwargs)
    240       # Functional model
    241       from tensorflow.python.keras.engine import functional  # pylint: disable=g-import-not-at-top
--> 242       return functional.Functional(*args, **kwargs)
    243     else:
    244       return super(Model, cls).__new__(cls, *args, **kwargs)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in __init__(self, inputs, outputs, name, trainable)
    113     #     'arguments during initialization. Got an unexpected argument:')
    114     super(Functional, self).__init__(name=name, trainable=trainable)
--> 115     self._init_graph_network(inputs, outputs)
    116 
    117   @trackable.no_automatic_dependency_tracking

/opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _init_graph_network(self, inputs, outputs)
    182       # It's supposed to be an input layer, so only one node
    183       # and one tensor output.
--> 184       assert node_index == 0
    185       assert tensor_index == 0
    186       self._input_layers.append(layer)

AssertionError:

  2. inference_model = Model(inputs=[x, a0, c0], outputs=outputs)

Error/Warning:

WARNING:tensorflow:Functional inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "functional_7" was not an Input tensor, it was generated by layer repeat_vector_249.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: repeat_vector_249/Tile:0

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-23-fe42db94b8ce> in <module>
      1 ### YOU CANNOT EDIT THIS CELL
----> 2 inference_model = music_inference_model(LSTM_cell, densor, Ty = 50)

<ipython-input-22-ee031c1910b9> in music_inference_model(LSTM_cell, densor, Ty)
     61         #print()
     62     # Step 3: Create model instance with the correct "inputs" and "outputs" (≈1 line)
---> 63     inference_model = Model(inputs=[x, a0, c0], outputs=outputs)
     64 
     65     ### END CODE HERE ###

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in __new__(cls, *args, **kwargs)
    240       # Functional model
    241       from tensorflow.python.keras.engine import functional  # pylint: disable=g-import-not-at-top
--> 242       return functional.Functional(*args, **kwargs)
    243     else:
    244       return super(Model, cls).__new__(cls, *args, **kwargs)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in __init__(self, inputs, outputs, name, trainable)
    113     #     'arguments during initialization. Got an unexpected argument:')
    114     super(Functional, self).__init__(name=name, trainable=trainable)
--> 115     self._init_graph_network(inputs, outputs)
    116 
    117   @trackable.no_automatic_dependency_tracking

/opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _init_graph_network(self, inputs, outputs)
    189     # Keep track of the network's nodes and layers.
    190     nodes, nodes_by_depth, layers, _ = _map_graph_network(
--> 191         self.inputs, self.outputs)
    192     self._network_nodes = nodes
    193     self._nodes_by_depth = nodes_by_depth

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _map_graph_network(inputs, outputs)
    929                              'The following previous layers '
    930                              'were accessed without issue: ' +
--> 931                              str(layers_with_complete_input))
    932         for x in nest.flatten(node.outputs):
    933           computable_tensors.add(id(x))

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_6:0", shape=(None, 1, 90), dtype=float32) at layer "lstm". The following previous layers were accessed without issue: []
  3. inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)
    Works without error!

Could somebody explain why we should still use x0 instead of x? If that’s the case, why do we compute the argmax and one-hot encoding of x in the first place?
Cheers,

Hello @mrgransky,

When we program with TensorFlow, it is good to think of it as “drawing a connected graph”. For example, every time we use x as an input to a TensorFlow function (e.g. argmax or one_hot), we connect that name x to a box called (e.g.) argmax, which outputs another tensor that we are free to name x again. Similarly, the next time we use the new x in another operation, we make a new connection.

We keep expanding the graph to instruct the computer how values get passed from one operation (box) to the next.

Now that we think of it as a graph that chains up a number of operations (boxes), there should be two ends of that chain, right? The input(s) and the output(s). TensorFlow needs us to state explicitly what they are. This is why we have to use two different names, x0 and x: so that, in the end, we can distinguish the input from its descendants, the intermediate x tensors derived from x0. If we did not use two different names, but just x all the time, we would lose track of that origin. In other words, x0 keeps a handle on the model’s input tensor; if we overwrite it, we can no longer tell TensorFlow which tensor the input is, and TensorFlow requires that information.
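Here is a minimal sketch of that idea, stripped of the assignment’s details (a shared Dense layer stands in for LSTM_cell and densor, and the shapes and loop length are made up). The point is that only the tensor created by tf.keras.Input can serve as a model input; the x that gets reassigned inside the loop is always a layer output:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

# x0 is the graph's entry point: only tensors created by
# tf.keras.Input carry the metadata Keras needs to recognize
# a model input.
x0 = tf.keras.Input(shape=(3,), name="x0")

# One shared layer reused at every step, like LSTM_cell in the
# assignment.
densor = Dense(3, activation="softmax")

# Reusing the name `x` for intermediate results is fine inside the
# loop, but each reassignment binds `x` to a *new* layer-output
# tensor; only `x0` still points at the model's true input.
x = x0
outputs = []
for _ in range(2):
    x = densor(x)          # x now names a layer output, not an Input
    outputs.append(x)

# Correct: the model boundary runs from the Input tensor x0 to the
# collected outputs. Model(inputs=x, ...) would hand Keras an
# intermediate tensor and trigger the "not an Input tensor" warning
# followed by a graph error, as in the tracebacks above.
model = Model(inputs=x0, outputs=outputs)
preds = model.predict(np.zeros((1, 3)), verbose=0)
```

So the argmax/one_hot computed for x are not wasted: they build the chain of boxes between the two ends, while x0 is what lets us tell Keras where that chain starts.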

Cheers,
Raymond
