Hi there,
hand-crafted features mean that you incorporate your domain knowledge into your modelling via the features you provide as model input. This could mean that you:
- apply some mathematical operations like transformations (e.g. a Fourier transform if you have steady states and an oscillating system) or just simple addition / multiplication / raising to a higher power etc. (see the sketch after this list)
- or you could even use more sophisticated models whose output then serves as input to your ML model
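To make the first point a bit more concrete, here is a minimal sketch of such hand-crafted features, assuming a toy setup (sine waves with noise, Fourier-based features, a simple Ridge model); the signal, feature choices and target are purely illustrative:

```python
# Hand-crafted features for oscillating signals (toy example):
# predict the true frequency from a few domain-knowledge features.
import numpy as np
from sklearn.linear_model import Ridge

def handcrafted_features(t, signal):
    """Fourier-based features plus simple statistics."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=t[1] - t[0])
    dominant = np.argmax(spectrum[1:]) + 1      # skip the DC component
    return [
        freqs[dominant],                        # dominant frequency
        spectrum[dominant],                     # its amplitude
        signal.mean(),
        signal.std(),
        signal.std() ** 2,                      # raising to a higher power
    ]

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
X, y = [], []
for true_freq in rng.uniform(0.5, 3.0, size=50):
    signal = np.sin(2 * np.pi * true_freq * t) + 0.1 * rng.standard_normal(t.size)
    X.append(handcrafted_features(t, signal))
    y.append(true_freq)

# The hand-crafted features are the model input; the model itself can stay simple.
model = Ridge().fit(np.array(X), np.array(y))
print(model.score(np.array(X), np.array(y)))
```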
What is better for you really depends on the task you want to solve. Here you can find an example where a simple model might be sufficient to model the expected value of a time series problem: Bias and variance, tradeoff - #2 by Christian_Simonis
In my experience, models with hand-crafted features are often powerful if you have low-dimensional spaces (fewer than ~18 dimensions), quite good domain knowledge encoded in your features, and at least a moderate amount of data to satisfy your model's needs.
In object detection you also have some classic models like these from OpenCV: OpenCV: Other tutorials (ml, objdetect, photo, stitching, video)
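As a small sketch of such a classic detector built on hand-crafted (Haar) features, assuming opencv-python is installed and "people.jpg" is a local test image (hypothetical path):

```python
import cv2

# Pretrained Haar cascade shipped with OpenCV (frontal faces as an example object class).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("people.jpg")            # replace with your own image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the hand-crafted feature detector over the image at several scales.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```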
But in my understanding, hand-crafted features come to their limits, especially if you want to reuse a model (e.g. with transfer learning) in a slightly different context or application scenario.
So deep learning (DL) is a very powerful way to do object detection: due to the characteristics of a picture or video you have high-dimensional spaces and tons of good training data. Also, the right model architectures have been developed in recent years to solve this kind of task. DL models basically take care of "feature engineering" implicitly in their more sophisticated layers in order to optimise the underlying cost function. The good thing is: you will learn all this in the DLS specialisation.
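Just to contrast it with the classic approach above, here is a hedged sketch of DL-based detection with a pretrained model from torchvision; the model choice, weights argument and image path are illustrative assumptions, not a recommendation:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN: the convolutional backbone learns the features implicitly.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = Image.open("people.jpg").convert("RGB")   # replace with your own image
with torch.no_grad():
    prediction = model([to_tensor(img)])[0]

# Keep only confident detections.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(label.item(), score.item(), box.tolist())
```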
Long story short: in object detection I am not aware of a robust, highly scalable solution that is purely based on classic ML methods with hand-crafted features AND outperforms DL-based models.
Best
Christian