C3W2 Lab 3: Training the model

I am trying to understand what BATCH_SIZE and BUFFER_SIZE are, as introduced in Lab 3 in the "Training the model" section:

```python
BUFFER_SIZE = 10000
BATCH_SIZE = 64

# Get the train and test splits
train_data, test_data = imdb_subwords['train'], imdb_subwords['test']

# Shuffle the training data
train_dataset = train_data.shuffle(BUFFER_SIZE)

# Batch and pad the datasets to the maximum length of the sequences
train_dataset = train_dataset.padded_batch(BATCH_SIZE)
test_dataset = test_data.padded_batch(BATCH_SIZE)

print(train_dataset.take(1))
```
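Note that printing `take(1)` only shows the dataset's structure, not the actual data. To peek at one real batch you can iterate over it; this is a minimal sketch assuming the lab loaded the dataset with `as_supervised=True`, so each element is a (text, label) pair:

```python
# Iterate over one batch to see the padded tensors (assumes
# as_supervised=True, i.e. each element is a (text, label) pair).
for padded_texts, labels in train_dataset.take(1):
    print(padded_texts.shape)  # (BATCH_SIZE, longest sequence length in this batch)
    print(labels.shape)        # (BATCH_SIZE,)
```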

In addition, when I increase the number of epochs from 10 to 15, the loss gets worse as the epochs go on. Why is that?

Fluctuations can happen during training; when a NN is trained long enough, the training loss generally trends downward. Please share details about how the loss varies with respect to epochs. Also see tf.config.experimental.enable_op_determinism for better reproducibility across runs.
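As a minimal sketch of that setup (assuming TensorFlow 2.8+, where enable_op_determinism is available; `model` stands for the compiled model from the lab, not something defined here):

```python
import tensorflow as tf

# Seed Python, NumPy, and TensorFlow in one call (TF 2.7+).
tf.keras.utils.set_random_seed(42)
# Force deterministic kernel implementations (TF 2.8+); may slow training.
tf.config.experimental.enable_op_determinism()
# You may also want shuffle(BUFFER_SIZE, seed=42) for a reproducible input order.

# `model`, train_dataset, and test_dataset come from the lab notebook.
history = model.fit(train_dataset, epochs=15, validation_data=test_dataset)

# Print the loss per epoch to see exactly how it varies.
for epoch, loss in enumerate(history.history['loss'], start=1):
    print(f"epoch {epoch:2d}: loss = {loss:.4f}")
```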

Please read the TensorFlow docs on tf.data.Dataset.batch and tf.data.Dataset.shuffle (in particular the buffer_size argument) to learn more.
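To make those arguments concrete: shuffle(buffer_size) keeps only buffer_size elements in memory and draws the next element uniformly from that buffer, so a buffer at least as large as the dataset gives a full uniform shuffle; padded_batch(batch_size) then groups batch_size elements per batch and zero-pads each sequence to the longest one in its batch. A toy example (names and values here are illustrative, not from the lab):

```python
import tensorflow as tf

# Toy dataset of variable-length "token" sequences (illustrative only).
sequences = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
dataset = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec(shape=(None,), dtype=tf.int32),
)

# Fill a 4-element buffer and sample uniformly from it on each draw.
dataset = dataset.shuffle(buffer_size=4)

# Group 2 elements per batch, zero-padding to the longest in each batch.
dataset = dataset.padded_batch(batch_size=2)

for batch in dataset:
    print(batch.numpy())
# e.g. [[4 5 0]
#       [1 2 3]]  -- each batch is padded to its own longest sequence
```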