We are graduating to the glorious world of TF and Keras here, which we visited earlier in Week 3 of Course 2 and in most of Course 4. You can see what the input data looks like by checking the earlier cell that loads it and then prints the shapes of everything. At that point, everything is stored in numpy arrays with the appropriate number of dimensions.

It turns out that in Keras, the “samples” dimension is implicit: when you give the “shape” of an input tensor, you give the shape of one sample. Keras assumes you’ll be handing it batches or minibatches of tensors of that shape, with the “samples” dimension as the first dimension when you actually feed it data.

As to how the vectorization is handled in the internal layers of Keras, we don’t really know. The assumption is that they are doing it in the most efficient way they can, using GPUs or TPUs if those are made available. TF/Keras are open source, so you can go as deep as you want in figuring out how things really work internally. I have not yet tried to “go there”, so I’m not sure how easy they make it to get to the official source.
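To make the implicit “samples” dimension concrete, here’s a minimal sketch. The 64x64 RGB image shape and the layer sizes are just placeholders I made up for illustration, not the actual assignment data:

```python
import numpy as np
from tensorflow import keras

# Suppose each sample is a 64x64 RGB image (a made-up shape for this
# example). The Input shape describes ONE sample - no batch dimension.
inputs = keras.Input(shape=(64, 64, 3))
x = keras.layers.Flatten()(inputs)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# When you actually feed data, the samples dimension comes first:
# a minibatch of 32 samples has shape (32, 64, 64, 3).
batch = np.random.rand(32, 64, 64, 3).astype("float32")
preds = model(batch)
print(preds.shape)  # (32, 1) - one prediction per sample
```

Notice that `shape=(64, 64, 3)` never mentions the 32; Keras tacks the batch dimension on for you and reports it as `None` in `model.summary()`, since it can be any size.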
The other thing to do if you want to know more is to have a look at the TF/Keras documentation. E.g. here’s the page for the Keras Input function. If you poke around on that website, you can also find higher-level tutorial-style articles. Here’s a great thread on our local forums about how to use the Keras Sequential and Functional APIs.
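In case it helps before you read that thread, here’s a quick sketch of the same tiny network written both ways. The 784-element input and the layer sizes are arbitrary placeholders:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sequential API: a simple linear stack of layers.
seq_model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Functional API: the same network, but each layer is explicitly called
# on the output of the previous one. This is what lets you build
# non-linear topologies (branches, multiple inputs/outputs, shared layers)
# that Sequential can't express.
inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(10, activation="softmax")(x)
func_model = keras.Model(inputs=inputs, outputs=outputs)

# Both print the same layer structure, batch dimension shown as None.
seq_model.summary()
func_model.summary()
```

For a plain stack like this the two are interchangeable; the Functional API only really earns its keep once the computation graph stops being a straight line.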