Hi,
I would like to understand the syntax (the details of the parameters) of the transformers and tokenizers used in the Week 1 lab. I went to the source on GitHub but still could not locate the specific parameters and methods that were used inside the transformer and tokenizer objects.
Regards
Are you referring to the following lines?
model_name = 'google/flan-t5-base'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
If not, can you please expand a bit on your question?
Thanks!
I am referring to:
1. tokenizer(sentence, return_tensors='pt') - what is return_tensors? Where can I read about it?
2. tokenizer.decode( and model.generate( - I would like to understand the full syntax of tokenizer and also of model. I think they are defined as classes, so I would like to understand the parameters that go into the methods they invoke here: .decode and .generate.
Thanks
Thanks for the clarifications!
For the tokenizer you can get started here: https://huggingface.co/docs/transformers/main_classes/tokenizer
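To make return_tensors concrete, here is a minimal sketch, assuming the transformers library and the google/flan-t5-base checkpoint used in the lab (the example sentence is just illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google/flan-t5-base', use_fast=True)

# return_tensors controls the type of the returned encodings:
# 'pt' -> PyTorch tensors, 'tf' -> TensorFlow tensors, 'np' -> NumPy arrays.
# Omitting it returns plain Python lists of token ids.
inputs = tokenizer("What time is it, Tom?", return_tensors='pt')
print(inputs['input_ids'])       # token ids, a PyTorch tensor of shape (1, sequence_length)
print(inputs['attention_mask'])  # 1 for real tokens, 0 for padding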
For the model you can get started here: https://huggingface.co/docs/transformers/main_classes/model (the generate method is documented under text generation: https://huggingface.co/docs/transformers/main_classes/text_generation)
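And here is a minimal sketch of the generate/decode round trip, again assuming the same checkpoint; the generation arguments shown (max_new_tokens, num_beams) and the prompt are only illustrative, not necessarily the exact ones used in the lab:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'google/flan-t5-base'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Translate English to German: How old are you?", return_tensors='pt')

# model.generate() runs the decoding loop; common parameters include
# max_new_tokens (length cap), num_beams (beam search), do_sample / temperature (sampling).
output_ids = model.generate(inputs['input_ids'], max_new_tokens=50, num_beams=1)

# tokenizer.decode() maps the generated token ids back into a string;
# skip_special_tokens=True drops markers such as <pad> and </s>.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))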
Both classes come from the Hugging Face Transformers library.
Hope this helps!