What exactly does this command do: model2.layers[4]?

If you have never read through the TensorFlow overview of Keras Layers and Models, you should.

Basically, a Layer is a unit of computation. It takes in one or more Tensor inputs and performs some kind of operation or transformation. This operation is performed inside the Layer’s call() function.

https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer
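
To make that concrete, here is a minimal custom Layer sketch. The name ScaleAndShift and the weight choices are just made up for illustration:

```python
import tensorflow as tf

class ScaleAndShift(tf.keras.layers.Layer):
    """Toy layer: an elementwise affine transform (illustrative only)."""

    def build(self, input_shape):
        # One trainable scale and shift per input feature.
        self.scale = self.add_weight(name="scale", shape=(input_shape[-1],),
                                     initializer="ones")
        self.shift = self.add_weight(name="shift", shape=(input_shape[-1],),
                                     initializer="zeros")

    def call(self, inputs):
        # The layer's computation lives here.
        return inputs * self.scale + self.shift

x = tf.random.normal((2, 3))
y = ScaleAndShift()(x)   # invoking the layer runs build(), then call()
print(y.shape)           # (2, 3)
```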

A Keras Model groups Layers into an object that has training and inference behavior. It has an attribute that is a collection of its Layers, named, appropriately enough, layers.

https://www.tensorflow.org/api_docs/python/tf/keras/Model
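
For example (layer sizes and names here are arbitrary, just to show the attribute):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", name="hidden"),
    tf.keras.layers.Dense(1, name="output"),
])

for i, layer in enumerate(model.layers):
    print(i, layer.name, type(layer).__name__)
# 0 hidden Dense
# 1 output Dense
```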

You can reference a particular layer in a model by indexing into the model’s layers, as in a_layer = my_model.layers[index]. What you get back from that lookup is guaranteed to be of type Layer. In the Keras class hierarchy, class Model inherits from class Layer (in object-oriented lingo, a Model is-a Layer), so the Layer you get back could also be of type Model. It looks like that is the assumption here: you are leveraging the class hierarchy to define a model out of layers, at least one of which is itself a model.
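
A quick sketch of that, with made-up names (inner_model, outer_model) just to show the nesting:

```python
import tensorflow as tf

inner_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(8, activation="relu")], name="inner_model")

outer_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    inner_model,                     # a Model used as a Layer
    tf.keras.layers.Dense(1),
])

nested = outer_model.layers[1]
print(isinstance(nested, tf.keras.layers.Layer))  # True: every entry is a Layer
print(isinstance(nested, tf.keras.Model))         # True: this entry is also a Model
print(len(nested.layers))                         # 1: the inner model's own layers
```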

It might make sense to define a simple base model, then extend its behavior by stacking it with other Layer instances. For example, if you always grouped Conv2D, Pooling, and ReLU, you could define a model with those three layers, then stack that block repeatedly as a single object (a rough sketch follows below). There are some existing threads with examples of this kind of stacking.
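
Something like this (filter counts, kernel size, and the final head are arbitrary choices, not from your model):

```python
import tensorflow as tf

def conv_block(filters):
    # A small reusable "base" model: Conv2D -> MaxPooling2D -> ReLU.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, 3, padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.ReLU(),
    ])

# Stack the block repeatedly; each block is a single entry in .layers.
model = tf.keras.Sequential([
    conv_block(16),
    conv_block(32),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

for i, layer in enumerate(model.layers):
    print(i, type(layer).__name__, isinstance(layer, tf.keras.Model))
# The two conv blocks show up as Sequential entries that are also Models.
```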

Here is one example from my own experience: I had a model that performed sentiment analysis. It required a specific kind of encoded input that was not at all human readable. I built a new model by putting a text encoding layer in front of it. In that case, my ‘base’ model was accessible through extended_model.layers[1].
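
If it helps, a rough sketch of that wrapping pattern is below. TextVectorization stands in for the text-encoding front end, and base_model is a made-up stand-in for the sentiment model, so treat it as an illustration of the indexing rather than my actual code:

```python
import tensorflow as tf

# Stand-in vocabulary and "base" model, just to show the wrapping pattern.
vocab = ["the", "movie", "was", "great", "terrible"]
encoder = tf.keras.layers.TextVectorization(output_sequence_length=8)
encoder.set_vocabulary(vocab)

base_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(encoder.get_vocabulary()),
                              output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
], name="base_model")

# Put the encoder in front of the existing model.
extended_model = tf.keras.Sequential([encoder, base_model])

print(extended_model.layers[1] is base_model)                       # True
print(extended_model(tf.constant(["the movie was great"])).shape)   # (1, 1)
```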

HTH