Hey everyone! I was exploring the notebook in the first lab, and I was wondering: is the following snippet deterministic?
inputs = tokenizer(dialogue, return_tensors='pt')
output = tokenizer.decode(
    model.generate(
        inputs["input_ids"],
        max_new_tokens=50,
    )[0],
    skip_special_tokens=True,
)
That is: given the same prompt (dialogue, in this case), will the tokenization and the generated output always be identical, regardless of the environment? And does that depend on the model, on the tokenizer, or on both?
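For what it's worth, here is how I imagined testing it empirically (the checkpoint name and prompt below are just placeholders, not necessarily what the lab loads):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"  # placeholder; substitute the lab's checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = "Person A: Hi! Person B: Hello!"  # any fixed prompt

def generate_once():
    inputs = tokenizer(dialogue, return_tensors="pt")
    ids = model.generate(inputs["input_ids"], max_new_tokens=50)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

# Tokenization is a deterministic vocabulary lookup, and generate() defaults
# to greedy decoding (do_sample=False) unless the model's generation config
# says otherwise, so both calls should return identical text. Passing
# do_sample=True would make the output random from run to run unless a seed
# is set first (e.g., transformers.set_seed(42)).
print(generate_once() == generate_once())

If I understand correctly, determinism would then hinge on the decoding strategy rather than on the tokenizer, but I'd love confirmation.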
I’m not having any particular problems following the course or the assignment so far; this is just curiosity.
Thanks,
Silvia.