LSTM future values prediction

Hi there, I am dealing with the LSTM forecast in Course C4 W4 of Sequences, Time Series and Prediction within the DeepLearning.AI TensorFlow Developer Professional Certificate Specialization, and I hope someone can help me out. In all the given examples we in fact compare the data predicted by model.predict with the validation data using the code mentioned below, so we see how well the model predicts. However, how can I predict the future, e.g. +300 steps (depending on the given dataset: days, months, whatever, …)? I understand I should use a windowed series plus something else to get the prediction.

Thank you,

Please see this user guide and move your post to the right topic.

Hi Jan. Thank you for updating the topic.

In this lab, Model predicts 1 timestep in the future for window_size timesteps of input data. Think of it as a list of lists where each inner list has window_size worth of data. The outer list represents the batch dimension.
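The single-step setup described above can be sketched as follows. This is a minimal illustration, not the lab's actual trained model: the architecture and the series are placeholders, and the model here is untrained, so only the shapes are meaningful.

```python
import numpy as np
import tensorflow as tf

window_size = 20

# Placeholder model: any model whose output layer has 1 unit
# predicts 1 timestep per input window.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_size,)),
    tf.keras.layers.Dense(1),
])

# Placeholder series standing in for the lab's time series.
series = np.arange(100, dtype=np.float32)

# Take the last window_size points as one inner list; the outer
# (batch) dimension here has size 1.
last_window = series[-window_size:][np.newaxis, :]  # shape (1, window_size)
next_value = model.predict(last_window, verbose=0)
print(next_value.shape)  # (1, 1): one batch entry, one future timestep
```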

Should you choose to make your model predict more than 1 timestep in the future, say, 10 steps, the following changes are required:

  1. The model output layer should now have 10 output units since the Dense layer now outputs 10 timesteps worth of data instead of 1.
  2. y_valid would be a list of values for each prediction. Care needs to be taken when generating data for training as well.

Hi Balaji, I really appreciate that you are trying to help me, but I simply cannot figure this one out. I have tried a lot but with no success. Windowing is pretty straightforward for me; with window_size = 20 and batch_size = 16 I get:

  • tf.Tensor: shape=(16, 20), dtype=float32, numpy= array for values, and
  • tf.Tensor: shape=(16,), dtype=float32, numpy= array for targets,

The dataset is shifted by 1 and shuffled inside the nested list. But I cannot work out the rest … could you please give me more clues?
Thank you in advance,


Here are a few hints to build a model that takes in window_size data points as input and produces 2 steps of prediction as output.

When creating the dataset, the changes are:

dataset = dataset.flat_map(lambda window: window.batch(window_size + 2))
dataset = dataset.map(lambda window: (window[:-2], window[-2:]))

Here’s an example after making the above changes:

for X_batch, y_batch in train_set.take(1):
    print(f'Model input has shape {X_batch.shape}')
    print(f'Model output has shape {y_batch.shape}')
    print(f'Single input looks like this: {X_batch[0].numpy()}')
    print(f'Output for the above input is: {y_batch[0].numpy()}')


As far as the model is concerned, the output layer should have 2 units instead of 1.
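A minimal sketch of such a model is below. The LSTM-based architecture is illustrative, not necessarily the lab's exact model; note that an LSTM layer expects a channel dimension, so the input shape here is (window_size, 1).

```python
import tensorflow as tf

window_size = 20

# Illustrative architecture: the only essential change versus the
# single-step model is Dense(2) in the output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_size, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(2),  # 2 units -> 2 future timesteps per window
])
model.compile(loss='mse', optimizer='adam')
print(model.output_shape)  # (None, 2)
```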

Hope I got it right (do not mind the precision; I tested re-learning with only 10 epochs), but the predictions copy the peak at the end. Thanks a lot, Balaji!