Prompt tuning using the transformers library

I asked this question in the course discussion forum to get some guidance on how to perform prompt tuning, since the labs don't have any examples of it. The answer was that there are some examples we can find online, such as in Hugging Face's PEFT library (which builds on the transformers library).
Here is one example of a prompt-tuning configuration I found there:

from peft import PromptEncoderConfig, get_peft_model

config = PromptEncoderConfig(
    peft_type="P_TUNING",
    task_type="SEQ_2_SEQ_LM",
    num_virtual_tokens=20,
    token_dim=768,
    num_transformer_submodules=1,
    num_attention_heads=12,
    num_layers=12,
    encoder_reparameterization_type="MLP",
    encoder_hidden_size=768,
)

When I create a model using this configuration:

prompt_model = get_peft_model(original_model, config)

I get the following warning:

/databricks/python/lib/python3.8/site-packages/peft/tuners/p_tuning.py:146: UserWarning: for MLP, the `encoder_num_layers` is ignored. Exactly 2 MLP layers are used.
  warnings.warn(

What is confusing is this: if the MLP only uses 2 layers and the `encoder_num_layers` parameter is ignored, why is it given in the example above?
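
For reference, here is a minimal sketch of what I believe the distinction is, based on my reading of the PEFT docs (so take the parameter meanings as my assumption, not something confirmed in the lab): `num_layers` describes the base model, while `encoder_num_layers` configures the prompt encoder itself and appears to be honored only for the "LSTM" reparameterization, not for "MLP".

from peft import PromptEncoderConfig

# Assumption: with the "LSTM" reparameterization, `encoder_num_layers`
# sets the depth of the LSTM prompt encoder, so the MLP warning should
# not apply. With "MLP", exactly 2 layers are used regardless.
lstm_config = PromptEncoderConfig(
    peft_type="P_TUNING",
    task_type="SEQ_2_SEQ_LM",
    num_virtual_tokens=20,
    token_dim=768,
    encoder_reparameterization_type="LSTM",
    encoder_num_layers=2,  # used for LSTM; ignored for MLP
    encoder_hidden_size=768,
)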

Great, this week's assignment only talks about LoRA. I have been looking forward to these techniques for a while. Thank you very much!