Hello,
I have a few general questions on the code for week 1, in particular Lab 2.
-
In the Transform section, my understanding is that the code transforms the original image files into 32-bit float tensors for input and output. Is that right: the input becomes a 28x28x1 float32 tensor, while the output (label) becomes a single float32 scalar?
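For reference, this is roughly what I think that Transform step is doing. I'm paraphrasing from memory, so the feature keys and exact ops below are my own guesses rather than the lab's actual code:

```python
import tensorflow as tf

# Hypothetical feature keys; the lab's actual names may differ.
_IMAGE_KEY = 'image'
_LABEL_KEY = 'label'


def preprocessing_fn(inputs):
    """Sketch of what I believe the Transform component does (not the lab code)."""
    outputs = {}
    # Raw image pixels -> a 28x28x1 float32 tensor per example.
    outputs[_IMAGE_KEY] = tf.reshape(
        tf.cast(inputs[_IMAGE_KEY], tf.float32), [-1, 28, 28, 1])
    # Label -> a single float32 value per example.
    outputs[_LABEL_KEY] = tf.cast(inputs[_LABEL_KEY], tf.float32)
    return outputs
```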
-
The Tuner section says that in the previous section the Transform component had saved transformed examples into TFRecords in compressed .gz format. If the inputs are already transformed, why exactly do we still need the transform graph, tf_transform_output?
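My guess is that the transform graph is loaded mainly to recover the schema of the already-transformed records rather than to re-apply any transformation, something like the snippet below (the path is made up for illustration):

```python
import tensorflow_transform as tft

# Load the Transform component's output (path is hypothetical).
tf_transform_output = tft.TFTransformOutput('/path/to/transform_output')

# The transformed feature spec tells tf.data how to parse the already-
# transformed TFRecords; no transformation is re-applied at this point.
feature_spec = tf_transform_output.transformed_feature_spec()
print(feature_spec)
```

Is that the only reason it is passed around, or is it also needed later, e.g. for serving?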
-
I just want to make sure I understand the data loading going on here. In the tuner_fn function, we load the images using _input_fn, passing in the file paths of the data as well as the transform graph. Within _input_fn, since the default batch_size is set to 32, are we basically loading 32 images at a time? And are those 32 images then processed through the transform graph afterward?
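To make the question concrete, this is the pattern I believe _input_fn follows, based on the standard TFX tutorials; the function and key names here are assumptions on my part:

```python
import tensorflow as tf

_LABEL_KEY = 'label'  # assumed label key


def _gzip_reader_fn(filenames):
    """Read the compressed .gz TFRecords written by Transform."""
    return tf.data.TFRecordDataset(filenames, compression_type='GZIP')


def _input_fn(file_pattern, tf_transform_output, batch_size=32):
    """Parse already-transformed examples into batches of `batch_size`."""
    transformed_feature_spec = tf_transform_output.transformed_feature_spec().copy()
    # Each element of this dataset is one batch of 32 (features, label) pairs.
    return tf.data.experimental.make_batched_features_dataset(
        file_pattern=file_pattern,
        batch_size=batch_size,
        features=transformed_feature_spec,
        reader=_gzip_reader_fn,
        label_key=_LABEL_KEY)
```

If that sketch is roughly right, my reading is that each dataset element is one batch of 32 already-transformed examples; please correct me if I'm wrong.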
-
Is there any kind of standardization or normalization happening in preprocessing before training? I'm not seeing it, and I would usually expect to see something like that when training on images.
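For example, I'd normally expect something along these lines somewhere in the preprocessing, or a dataset-wide analyzer such as tft.scale_to_0_1 / tft.scale_to_z_score inside preprocessing_fn. This is purely an illustration of what I mean, not code from the lab:

```python
import tensorflow as tf


def _scale_pixels(image_pixels):
    """Rescale raw 0-255 pixel values into the [0, 1] range as float32."""
    return tf.cast(image_pixels, tf.float32) / 255.0
```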
-
It looks like after the Trainer component finishes training, it saves its output as a .pb file. Was this specified anywhere, or did it happen by default? Is there a way to set the filename? Finally, what format is this .pb file: a SavedModel, a frozen graph, or something else?
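My assumption so far is that the .pb file is the graph part of a TensorFlow SavedModel, written by something like the last line of the Trainer's run_fn. Here is a minimal sketch of what I mean, with a made-up model and output path:

```python
import tensorflow as tf

# Tiny stand-in model, just to demonstrate the export format.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# I believe run_fn ends with something like model.save(fn_args.serving_model_dir, ...).
# Saving in TF format produces a SavedModel directory containing saved_model.pb
# plus a variables/ folder with the weights.
model.save('/tmp/serving_model_dir', save_format='tf')
```

If that is the case, I'd guess the saved_model.pb name is fixed by the SavedModel format rather than configurable, but I'd like to confirm.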
Thanks in advance.