Argmax vs random-choice

In the Dinosaurus assignment we used np.random.choice while sampling, to select a next letter that is likely, but not always the same.
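As a small sketch of that idea (the probabilities here are made up, not from the assignment), np.random.choice draws an index in proportion to the softmax output, so the most likely letter is picked most often but not every time:

```python
import numpy as np

# Hypothetical softmax output over a 4-letter vocabulary at one time step.
probs = np.array([0.1, 0.6, 0.2, 0.1])

np.random.seed(0)
# Each call draws an index in proportion to probs: index 1 is drawn
# most often, but indices 0, 2 and 3 still appear sometimes.
samples = [np.random.choice(len(probs), p=probs) for _ in range(20)]
print(samples)
```

Running this repeatedly (without a fixed seed) gives a different sequence each time, which is exactly what makes the generated dinosaur names vary.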

while in the Jazz assignment we used (during prediction and sampling):

indices = np.argmax(pred, axis=-1)
results = to_categorical(indices, num_classes=x_initializer.shape[-1])

which means that the trained neural network will always pick the class with the highest probability, and thus we will always get the same output.
How is it valid to pick the highest-probability class while sampling?
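To make the contrast concrete, here is a minimal sketch (with made-up values, and np.eye standing in for Keras's to_categorical) showing that argmax is deterministic: the same predictions always map to the same indices, so the generated music never changes between runs:

```python
import numpy as np

# Hypothetical batch of predictions: 3 time steps over a 5-class vocabulary.
pred = np.array([[0.10, 0.50, 0.20, 0.10, 0.10],
                 [0.30, 0.30, 0.20, 0.10, 0.10],
                 [0.05, 0.05, 0.10, 0.20, 0.60]])

# argmax always returns the same indices for the same pred,
# so the output sequence is fully determined by the model.
indices = np.argmax(pred, axis=-1)
print(indices)  # -> [1 0 4] on every run

# One-hot encode the chosen classes (a minimal stand-in for to_categorical).
results = np.eye(pred.shape[-1])[indices]
```

Note that on the tied second row, np.argmax simply returns the first maximum (index 0), again deterministically.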

If I recall correctly, I think Andrew discussed this in one of the lectures.

I have seen the videos. Yes, Prof. Andrew talked about using np.random.choice when we want to pick a probable choice but not always the same one; this gives us a different outcome every time we sample.

The same should apply while generating music, but the assignment uses np.argmax(). I don’t remember him saying anything about using argmax until week 3.

But if it is the case that I missed some part, I will revise it. Can you tell me which lecture you are referring to?