On the ending of the course

How you represent the data in your chess example doesn’t matter, as long as you are consistent. The algorithm will learn equally well either way; the only difference might be in the amount of storage and memory required.

There’s an interesting example of this in DLS C1 involving unrolling or “flattening” 2D RGB images into vectors in order to feed them to Logistic Regression or a Feed Forward Neural Network: there are two possible orders for the flattening, “C” order and “F” order. It turns out the accuracy of the model is the same with either orientation of the data, as long as you apply it consistently. Here’s a thread which discusses that, but note that you have to read all the way to the end to see the discussion of the points I mentioned above.
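
To make the “C” versus “F” point concrete, here’s a minimal sketch in NumPy. The tiny 2 x 2 x 3 shape and values are just illustrative, not taken from the assignment or the thread:

```python
import numpy as np

# A toy 2 x 2 RGB "image": shape (height, width, channels) = (2, 2, 3).
img = np.arange(12).reshape(2, 2, 3)

# "C" order (row-major, NumPy's default): the last axis varies fastest, so the
# flattened vector keeps the 3 channel values of each pixel next to each other.
flat_c = img.flatten(order="C")   # [0 1 2 3 4 5 6 7 8 9 10 11]

# "F" order (column-major, Fortran-style): the first axis varies fastest, so the
# vector lists all of channel 0 first, then channel 1, then channel 2.
flat_f = img.flatten(order="F")   # [0 6 3 9 1 7 4 10 2 8 5 11]

# Same 12 values either way; only the layout differs. The model just sees a
# fixed permutation of the input features, so as long as every image is
# flattened the same way, training works out the same.
print(np.array_equal(np.sort(flat_c), np.sort(flat_f)))  # True
```

The only practical difference is which permutation of the pixel values the network sees; since the weights are learned per input feature, a consistent permutation changes nothing about what the model can learn.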

As for wanting to practice what you’ve learned, it just turns out this course is not a programming course. You can “hold that thought” on the things you learned here and watch for situations in which you can apply those ideas as we move on through Course 4 (ConvNets) and Course 5 (Sequence Models).
