Processor and Tokenizer

I have seen AutoProcessor and AutoTokenizer while training transformer models; both require a pretrained model to be instantiated.
As far as I know, one is used to process the dataset and the other is used to load the vocabulary that comes with the pretrained model. Can anyone please give some clarity about this?

The two utilities you mention come from the Hugging Face Transformers library. Both are used to prepare the input for the selected model. AutoTokenizer is used for models like BERT, BLOOM, and others where the input is typically text: it converts raw strings into the token IDs the model expects. AutoProcessor, which I really don't have much experience with, is oriented towards multi-modal models: it bundles the tokenizer together with the other preprocessors (e.g. an image or audio feature extractor) so that the raw inputs are turned into the objects the model will consume.
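For concreteness, here is a toy sketch of what a tokenizer produces. It mimics the output shape of a Hugging Face tokenizer (`input_ids` plus `attention_mask`) using a made-up whitespace vocabulary, so it runs without the library; the real `AutoTokenizer.from_pretrained(...)` does the same job with the model's actual subword vocabulary.

```python
# Toy illustration of a tokenizer's job: map a batch of texts to the
# integer IDs and attention mask a model expects. The vocabulary and
# function here are invented for illustration; a real call would be:
#   tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
#   batch = tokenizer(["hello world"], padding=True)

def toy_tokenize(texts, vocab, pad_id=0):
    """Whitespace-tokenize a batch of strings and pad to equal length."""
    ids = [[vocab.get(tok, vocab["[UNK]"]) for tok in t.split()] for t in texts]
    max_len = max(len(seq) for seq in ids)
    input_ids = [seq + [pad_id] * (max_len - len(seq)) for seq in ids]
    # 1 marks real tokens, 0 marks padding the model should ignore.
    attention_mask = [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in ids]
    return {"input_ids": input_ids, "attention_mask": attention_mask}

vocab = {"[PAD]": 0, "[UNK]": 1, "hello": 2, "world": 3}
batch = toy_tokenize(["hello world", "hello"], vocab)
print(batch["input_ids"])       # [[2, 3], [2, 0]]
print(batch["attention_mask"])  # [[1, 1], [1, 0]]
```

An AutoProcessor wraps this same step together with the non-text preprocessing (resizing and normalizing images, resampling audio, etc.), returning one combined batch for a multi-modal model.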