I do not have any intuition for why we end up creating a Model in music_inference_model():
inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)
In music_inference_model() we trigger the forward pass and generate the output, and we then store these outputs in “outputs”.
Why would we have to trigger inference again with inference_model.predict([x_initializer, a_initializer, c_initializer])? Why don’t we just use music_inference_model() and then return the outputs?
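For context, here is roughly what I understand music_inference_model() to be doing. I am paraphrasing from memory, so the exact variable names and details may not match the notebook:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, RepeatVector
from tensorflow.keras.models import Model

def music_inference_model(LSTM_cell, densor, Ty=100):
    # LSTM_cell and densor are the layers trained in part 2
    # (LSTM_cell was built with return_state=True)
    n_values = densor.units
    n_a = LSTM_cell.units

    x0 = Input(shape=(1, n_values))
    a0 = Input(shape=(n_a,), name="a0")
    c0 = Input(shape=(n_a,), name="c0")
    a, c, x = a0, c0, x0

    outputs = []
    for t in range(Ty):  # Ty = 100 generation steps
        a, _, c = LSTM_cell(x, initial_state=[a, c])
        out = densor(a)
        outputs.append(out)
        # take the argmax of this step's prediction and feed it back
        # in as the input for the next step
        x = tf.one_hot(tf.math.argmax(out, axis=-1), depth=n_values)
        x = RepeatVector(1)(x)

    # this is the part I don't get: we hand the collected outputs to a Model
    inference_model = Model(inputs=[x0, a0, c0], outputs=outputs)
    return inference_model
```

If I am reading this right, the for loop already “runs” when the function is called, so to me it looks like the generation has happened here, which is why the later predict() call confuses me.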
In my mind we:
- trained the model in part 2 (LSTM_cell and densor)
- we then use the trained model in music_inference_model() to create “outputs”
- in music_inference_model() we then create a Model that is assigned those outputs (??? seems unnecessary, why are we telling a Model what its output values are?)
- then in predict_and_sample() we do inference_model.predict() (??? Didn’t we already run the inference in music_inference_model()? How does it even know that it is supposed to generate 100 values?)
Is there some Python/Keras magic I do not understand? I do not see the link between music_inference_model() and inference_model.predict(). How does inference_model.predict() know that it is supposed to feed each generated value back in as the input for the next step?
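And here is roughly what I understand predict_and_sample() to look like, again paraphrased from memory (the np.argmax / to_categorical part in particular is my reconstruction):

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

def predict_and_sample(inference_model, x_initializer, a_initializer, c_initializer):
    # this seems to be the only line that actually computes anything
    pred = inference_model.predict([x_initializer, a_initializer, c_initializer])
    # pred comes back as a list of Ty arrays, one per generated step
    indices = np.argmax(np.array(pred), axis=-1)
    results = to_categorical(indices, num_classes=x_initializer.shape[-1])
    return results, indices
```

To me this predict() call looks like a second, separate run of the model, and I do not see how it connects back to the loop inside music_inference_model().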