Trax does not use any particular model (like CBOW) to create word embeddings. The Trax Embedding layer simply maps (assigns) a vector of values, initially random, to each integer (e.g. a word token id) and updates those values through gradient descent according to the loss function.
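For a quick illustration, here is a minimal sketch of that lookup (the sizes, token ids, and variable names are made up for this example):

```python
import numpy as np
from trax import layers as tl
from trax import shapes

# Hypothetical sizes: a vocabulary of 10 tokens, 4-dimensional vectors.
embedding = tl.Embedding(vocab_size=10, d_feature=4)

# A toy batch of token ids (integers in [0, 10)).
token_ids = np.array([[1, 5, 9]])

# Initialize the weight matrix (randomly), then look up the vectors.
embedding.init(shapes.signature(token_ids))
vectors = embedding(token_ids)

print(vectors.shape)            # (1, 3, 4): one 4-d vector per token id
print(embedding.weights.shape)  # (10, 4): the trainable lookup table
```

Calling `embedding(token_ids)` is just indexing into that (10, 4) matrix; nothing like CBOW runs under the hood.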
You may find this post useful.
When you create a Trax Embedding layer, you specify the number of tokens (the number of rows, usually the vocabulary size plus a few special tokens) and the embedding dimension (the number of columns, i.e. how many values each vector holds). Initially these values are random. Then, as you train your model, they are updated according to the loss function you specified: whenever the model makes a prediction, right or wrong, the values are nudged up or down with varying magnitude. After training, you have the learned weights of the Embedding layer (and of the other layers).
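To see those weights actually being updated, here is a toy training sketch, loosely following the pattern from the Trax quick intro; the model, the data stream, and the labels are invented for the example:

```python
import numpy as np
import trax
from trax import layers as tl
from trax.supervised import training

# Toy classifier: embed token ids, average the vectors, predict 2 classes.
model = tl.Serial(
    tl.Embedding(vocab_size=10, d_feature=4),
    tl.Mean(axis=1),
    tl.Dense(2),
    tl.LogSoftmax(),
)

# Hypothetical endless stream of (inputs, targets, weights) batches.
def toy_batches():
    while True:
        x = np.random.randint(0, 10, size=(8, 3)).astype(np.int32)  # token ids
        y = (x.sum(axis=1) % 2).astype(np.int32)   # made-up labels
        w = np.ones_like(y, dtype=np.float32)      # per-example loss weights
        yield x, y, w

train_task = training.TrainTask(
    labeled_data=toy_batches(),
    loss_layer=tl.WeightedCategoryCrossEntropy(),
    optimizer=trax.optimizers.Adam(0.01),
)

loop = training.Loop(model, train_task)
loop.run(n_steps=10)  # gradient descent updates the embedding rows here

# The learned embedding matrix is the first set of weights in the model.
print(model.weights[0].shape)  # (10, 4)
```

After `loop.run`, the rows of `model.weights[0]` are no longer the random initial values; they have been adjusted by the optimizer to reduce the loss.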