I’m currently watching the lecture introducing the architecture of the CBOW model.
Younes explains that the neural network has 3 layers: input layer, hidden layer, and output layer.
However, the weights are defined as values that sit between layers, which gives two sets of weights (plus their biases and activation functions).
It’s been a while since I followed the Deep Learning specialization, but I seem to recall that there a layer of an NN was defined as its weights, biases, and activation function.
Therefore I’m wondering whether, following this view, the CBOW model could actually be seen as a neural network with only two layers.
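To make the two-sets-of-weights point concrete, here is a minimal NumPy sketch of the CBOW forward pass as I understand it from the lecture (the ReLU hidden activation and softmax output follow the course's setup; the dimensions and variable names are my own illustrative choices):

```python
import numpy as np

V, N = 8, 3  # illustrative vocabulary size and embedding (hidden) size
rng = np.random.default_rng(0)

# Two sets of weights, each sitting "between" a pair of layers:
W1, b1 = rng.standard_normal((N, V)), np.zeros((N, 1))  # input -> hidden
W2, b2 = rng.standard_normal((V, N)), np.zeros((V, 1))  # hidden -> output

def softmax(z):
    e = np.exp(z - z.max(axis=0))
    return e / e.sum(axis=0)

def forward(x):
    """x: (V, 1) average of the one-hot context-word vectors."""
    h = np.maximum(0, W1 @ x + b1)    # hidden layer (ReLU)
    return softmax(W2 @ h + b2)       # output layer: probabilities over V

x = np.zeros((V, 1))
x[[1, 4]] = 0.5                       # average of two one-hot context words
y_hat = forward(x)                    # (V, 1), columns sum to 1
```

Counting by parameter sets (W1/b1 and W2/b2), this is two layers; counting the input, hidden, and output representations, it is three — which is exactly the terminology question.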
If you refer back to DLS Course 1, Week 3, from timestamp 3:30, you will hear Prof. Ng explain that, technically, a model with an input layer, one hidden layer, and an output layer is a 3-layer model, but that in practice, as also seen in research papers, the input layer doesn’t count.