All soft tokens are shown as x1; are they all the same vector?

The material says: "With prompt tuning, you add additional trainable tokens to your prompt and leave it up to the supervised learning process to determine their optimal values. The set of trainable tokens is called a soft prompt, and it gets prepended to embedding vectors that represent your input text."
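My understanding of that mechanism is something like the minimal sketch below, assuming a PyTorch-style setup (the class name `SoftPrompt` and the parameter values are illustrative, not from the course):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Sketch: a soft prompt is a separate trainable matrix of
    embedding vectors, prepended to the input-text embeddings."""

    def __init__(self, num_soft_tokens: int, embed_dim: int):
        super().__init__()
        # Each soft token is its own learnable vector; none of them
        # is a copy of any input token's embedding.
        self.soft_tokens = nn.Parameter(torch.randn(num_soft_tokens, embed_dim))

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.soft_tokens.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt to the embedded input text.
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: 20 soft tokens prepended to a batch of embedded inputs.
soft_prompt = SoftPrompt(num_soft_tokens=20, embed_dim=768)
x = torch.randn(2, 10, 768)   # embeddings of the actual input tokens
combined = soft_prompt(x)     # shape: (2, 30, 768)
```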
In the slide, however, all soft tokens are labeled x1. Are they the same as the first actual token, since the first language token is also labeled x1?
If they are different, it would be better to denote them with a different symbol, such as s1, s2, etc.