How to increase the hidden dimension of an RNN

Looking at the documentation for SimpleRNN, it looks like in TensorFlow the dimensions of the input, output, and hidden state all have to match. With one-dimensional input this means the hidden state would also be one-dimensional, which feels like it can't remember much history. One approach might be to stack several SimpleRNNs so that each RNN keeps track of one "state" number.

My question is: what is a good way of increasing the dimension of the hidden state? I ended up writing something like this:

        # layers from a tf.keras.Sequential model; the input shape is (WINDOW_SIZE, 1)
        tf.keras.layers.Lambda(lambda x: tf.multiply(tf.convert_to_tensor([1.0] + [0.0] * 9), x),
                               output_shape=(WINDOW_SIZE, 10)),  # broadcast (..., 1) to (..., 10): first channel is x, the rest are zeros
        tf.keras.layers.SimpleRNN(10, return_sequences=True),
        tf.keras.layers.Dense(1)

This embeds the one-dimensional input into a 10-dimensional space, so the output and hidden state are 10-dimensional, and I can use a dense layer to reduce the dimensionality again. Is this a good approach, or is there a better way?

Hello!
That’s a great question, and I understand your concern about the hidden state dimension in an RNN. Let’s clarify a few points.

Firstly, the hidden state dimension in an RNN such as TensorFlow's SimpleRNN is determined by the units parameter, and the input, output, and hidden dimensions do not all have to match. For SimpleRNN the output at each timestep is the hidden state, so SimpleRNN(10) gives you a 10-dimensional hidden state (and output), enabling the network to capture more complex features from the data over time. The input dimension is independent, because the layer learns a kernel that projects the input into the hidden space. There's no need to manipulate the input data to increase this dimension: adjusting the units parameter is sufficient.
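For example, here is a minimal sketch (WINDOW_SIZE is just a placeholder for your sequence length) that feeds a 1-dimensional input straight into a 10-unit SimpleRNN, with no padding layer at all:

    import tensorflow as tf

    WINDOW_SIZE = 20  # placeholder sequence length

    inputs = tf.keras.Input(shape=(WINDOW_SIZE, 1))                        # one feature per timestep
    hidden = tf.keras.layers.SimpleRNN(10, return_sequences=True)(inputs)  # units=10 -> 10-dim hidden state
    outputs = tf.keras.layers.Dense(1)(hidden)                             # back to one value per timestep
    model = tf.keras.Model(inputs, outputs)

    x = tf.random.normal((4, WINDOW_SIZE, 1))  # batch of 4 sequences
    print(model(x).shape)  # (4, 20, 1)

This is your model minus the Lambda layer, and it learns the same kind of 10-dimensional recurrence.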

Your idea of padding the input with zeros to increase the dimension is not needed. The RNN's input kernel already learns a projection from your 1-dimensional input into the 10-dimensional hidden space, so the extra zero channels only add weights that are multiplied by zeros and never contribute. It is better to work directly with the network architecture (the units parameter) to adjust dimensionality.
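You can see why the padding is redundant by inspecting the layer's weights. A small sketch, assuming TF 2.x's weight ordering (input kernel, recurrent kernel, bias):

    import tensorflow as tf

    rnn = tf.keras.layers.SimpleRNN(10)
    rnn.build((None, 20, 1))  # (batch, timesteps, features=1)
    kernel, recurrent_kernel, bias = rnn.weights
    print(kernel.shape)            # (1, 10): projects the 1-dim input into the 10-dim hidden space
    print(recurrent_kernel.shape)  # (10, 10): hidden-to-hidden transition

With the zero-padded input the kernel would instead be (10, 10), and the nine rows that multiply the zero channels would be dead parameters.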

[SimpleRNN documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/SimpleRNN)

If you have any further questions or need clarification, I am here to help.
