I learned that PCA is good for dimensionality reduction.
I also learned that PCA improves the accuracy/performance of a model, giving better results.
But I gather it no longer does? Do the newest algorithms simply not need it any more? When did this change, and with which algorithms was PCA useful and then no longer useful?
These are really the same concept: reducing the number of features can make a model work better or be easier to train. For example, it takes much longer to train an image classifier on fully populated images than on images that have had redundant features removed (e.g. the same-color pixels around the border).
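Here is a minimal sketch of that idea, assuming scikit-learn and its built-in digits dataset (the classifier choice and the 95% variance threshold are just illustrative assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 8x8 grayscale digit images -> 64 raw pixel features
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Keep enough components to explain ~95% of the variance; border pixels that
# are (almost) always the same carry little variance and are largely dropped.
model = make_pipeline(PCA(n_components=0.95), LogisticRegression(max_iter=5000))
model.fit(X_train, y_train)

print("components kept:", model.named_steps["pca"].n_components_)
print("test accuracy:", model.score(X_test, y_test))
```

The classifier ends up working with far fewer than 64 features, which is the training-cost and (sometimes) accuracy benefit the question is asking about.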
PCA lost some of its utility as computers got faster, memory got cheaper, and GPUs became common for accelerating training. Efficiency matters less now; the emphasis has shifted toward rapid development and the use of standard pre-trained models.