I need some clarification on vector x

Does \vec{x} always have to be a 1-D vector? Sorry for asking, but I am confused. I have watched the first few videos, and I am wondering if it could be a matrix itself.

More specifically, regarding the video where one of the first layer’s inputs was the price of the product: could we feed the algorithm not only the latest price of the product, but also a few of its past values? And do that for all of the different features?

I hope my question is clear.


Hello @Kosmetsas_Tilemahos

The input \vec{x} can be a 2-D array (a matrix) of shape (m, n). This represents m samples, each with n distinct features.
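For example, in NumPy terms (a minimal sketch with made-up numbers, not the course's actual dataset), the full training set is a 2-D array, while each individual example is one row of it:

```python
import numpy as np

# Hypothetical training set: m = 4 samples (rows), n = 3 features (columns).
X = np.array([
    [2104.0, 5.0, 1.0],
    [1416.0, 3.0, 2.0],
    [1534.0, 3.0, 2.0],
    [ 852.0, 2.0, 1.0],
])
y = np.array([460.0, 232.0, 315.0, 178.0])  # one target value per row

m, n = X.shape
print(m, n)   # 4 samples, 3 features
print(X[0])   # a single example: a 1-D vector of n feature values
```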

Regarding your comment: “could we feed an algorithm not only with the last price of the product, but rather feed it with a few of its past values?”

We should not assume that each sample/row represents the state of the data with respect to time. Unless a time reference is given in the data, we are not at liberty to assume that the first row represents the latest set of data and the last row represents the oldest, or vice versa.

If there is no reference of time given in the data, we can only say that for the input values x_1^{(m)}, x_2^{(m)}, \ldots, x_n^{(m)}, the output takes the value y^{(m)}, where the superscript (m) only denotes the row number of the sample.
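If your data does come with a time reference and you do want to include a few past prices, as you suggest, one simple way to stay within this framework is to turn each past value into its own feature column (often called lag features). A minimal sketch, assuming a hypothetical 1-D price series ordered oldest to newest:

```python
import numpy as np

# Hypothetical price history, ordered oldest -> newest.
prices = np.array([10.0, 10.5, 11.2, 10.8, 11.5, 12.0])

def make_lag_features(series, n_lags):
    """Each row holds the previous n_lags values; the target is the next value."""
    X_rows, y_rows = [], []
    for t in range(n_lags, len(series)):
        X_rows.append(series[t - n_lags:t])  # past values become n_lags separate features
        y_rows.append(series[t])             # value to predict
    return np.array(X_rows), np.array(y_rows)

X, y = make_lag_features(prices, n_lags=3)
print(X.shape)  # (3, 3): 3 samples, each with 3 past prices as features
print(y)        # the corresponding "current" prices to predict
```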

Note that methods built around time sequences are a lot more complicated; they are covered in most “deep learning” courses.
