I’m currently watching the lecture introducing the architecture of the CBOW model.
Younes explains that the neural network has 3 layers: input layer, hidden layer, and output layer.
However, the weights are defined as values that sit between layers, which results in two sets of weights (plus biases and activation functions).
It’s been a while since I followed the Deep Learning specialization, but I seem to recall that there a layer of a NN was defined as the combination of weights, biases, and an activation function.
Therefore I’m wondering if, following this view, the CBOW model could actually be seen as a neural network with two layers only.
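To make my question concrete, here is a rough NumPy sketch of how I understand the CBOW forward pass (my own illustration, not the course code; the sizes V and N are arbitrary). Under the "layer = weights + bias + activation" view, this would count as two layers, since there are exactly two weight matrices:

```python
import numpy as np

V, N = 5, 3  # vocabulary size and embedding size (arbitrary for illustration)
rng = np.random.default_rng(0)

# Two sets of weights/biases, sitting between the three "layers" of units:
W1, b1 = rng.standard_normal((N, V)), np.zeros((N, 1))  # input -> hidden
W2, b2 = rng.standard_normal((V, N)), np.zeros((V, 1))  # hidden -> output

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=0))
    return e / e.sum(axis=0)

x = np.zeros((V, 1))
x[[0, 2]] = 0.5                 # averaged one-hot vectors of the context words

h = relu(W1 @ x + b1)           # "layer 1": W1, b1, ReLU
y_hat = softmax(W2 @ h + b2)    # "layer 2": W2, b2, softmax

print(y_hat.shape)              # (V, 1) probability distribution over the vocab
```

So the same model reads as three layers if you count the columns of units (input, hidden, output), but two if you count the parameterized transformations.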
Any thoughts on the matter?