The labs do a nice job of visualizing the training data on a 2-dimensional x-y plot. In practice we often have many features spanning far more than 2 dimensions. Is there a way to visualize higher-dimensional data sets?
Not very easily, due to the limitations of the human vision system.
So how would one introduce polynomial feature transforms if you don’t know what your data looks like?
Note that PCA can be useful in summarizing a complex data set. It’s been added to Course 2, Week 3.
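For anyone curious, here is a minimal sketch of how PCA can summarize a data set down to 2 dimensions for plotting (this assumes scikit-learn and NumPy are available; the data here is random just for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 samples, 10 features: too many dims to plot directly

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)     # shape (200, 2), ready for an x-y scatter plot

print(X_2d.shape)
print(pca.explained_variance_ratio_)  # fraction of variance kept by each of the 2 axes
```

The explained variance ratio tells you how faithful the 2-D summary is: if the first two components capture most of the variance, the scatter plot is a reasonable picture of the full data set.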
In addition to what TMosh said: you don't need to know what your data looks like to do regression or classification (if you have more than 3 features). The regression algorithms tune the weights without needing to see the graphs. Note that there is a difference between multiple-feature regression and polynomial regression.
How can you know if your data is linearly separable if you can’t see it? Isn’t that the entire point of polynomial features (i.e., x_1x_2, x_1^2, etc)?
The problem is that the number of dimensions we’re dealing with here can run into the hundreds and even thousands or more. As Tom said, the human brain just isn’t evolved to deal with visualizing more than 3 dimensions very easily.
On the linear separability question, one alternative is to reason from the results. Try fitting the data with Logistic Regression and see what kind of results you get. If the accuracy is not very good, that probably means your data is not linearly separable. So then you try more complex alternatives: first try polynomial feature expansion and see if that helps. If it helps, but not enough, then you need to try neural networks next.
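To make that strategy concrete, here's a small sketch (assuming scikit-learn; the concentric-circles data set is synthetic and deliberately not linearly separable):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# Two concentric rings: no straight line can separate the classes.
X, y = make_circles(n_samples=400, noise=0.05, factor=0.5, random_state=0)

# Step 1: plain Logistic Regression on the raw features.
linear = LogisticRegression().fit(X, y)

# Step 2: same model after degree-2 polynomial expansion
# (adds x1^2, x2^2, x1*x2 terms).
poly = make_pipeline(PolynomialFeatures(degree=2),
                     LogisticRegression(max_iter=1000)).fit(X, y)

print(f"linear accuracy: {linear.score(X, y):.2f}")  # poor -> data not linearly separable
print(f"poly accuracy:   {poly.score(X, y):.2f}")    # much better with squared terms
```

The jump in accuracy after polynomial expansion is the signal: you never plotted anything, but the results tell you the decision boundary is curved. If even the expanded features fell short, a neural network would be the next thing to try.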
Thanks, that is what I was looking for!