How do we decide optimal values of the hyperparameters for an embedding layer (output vector dimension and max length) based on the data?

In C3_W2_Lab_1_imdb, the parameters used are:
vocab_size = 10000
max_length = 120
embedding_dim = 16

but in C3_W2_Lab_2_sarcasm_classifier, the max length is different and the parameters used are:

vocab_size = 10000
max_length = 32
embedding_dim = 16

So how do we decide these parameter values? I understand that max_length is the input length used when padding the input sequences, but then how do we choose a padding length and set the embedding dimension for a layer?

There is no fixed algorithm for arriving at the best value of the embedding dimension. You have to try different values (popular architectures tend toward powers of 2) and decide based on model performance.
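For max_length, a common starting point is to look at the distribution of tokenized sequence lengths in your training data and pad to a high percentile, so only a few sequences get truncated. For embedding_dim, one rule of thumb sometimes cited (not an official rule from these labs) is roughly the fourth root of the vocabulary size, rounded to a power of 2, then tuned from there. A minimal sketch, using made-up sequence lengths purely for illustration:

```python
import numpy as np

# Hypothetical lengths of tokenized training sentences (illustrative only).
sequence_lengths = [12, 45, 30, 18, 120, 27, 33, 60, 22, 95]

# Heuristic: pad to the 95th percentile of the length distribution,
# so roughly 5% of sequences are truncated instead of the longest one
# dictating the padding for everything.
max_length = int(np.percentile(sequence_lengths, 95))

# Rule-of-thumb starting point for the embedding size:
# fourth root of the vocabulary size, rounded to the nearest power of 2.
vocab_size = 10000
raw_dim = vocab_size ** 0.25                   # 10000 ** 0.25 == 10.0
embedding_dim = 2 ** round(np.log2(raw_dim))   # nearest power of 2

print(max_length, embedding_dim)
```

These values are only a starting point; you would still compare a few candidates (e.g. embedding_dim of 8, 16, 32) against validation performance, which is why the labs can use max_length = 120 for IMDB reviews but 32 for short sarcasm headlines.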

Okay, that clears up my doubt.