Hi there,
if I understand your question correctly, you are asking whether having a benchmark makes sense, especially given the computational limits of the embedded or edge device in your target deployment scenario.
Yes, there are cases where this makes sense! In particular when you need to make architectural decisions (e.g. which functions should run in the cloud and which on the edge, or when retraining should be triggered). Having the right data ready also speaks in favour of doing so, as does a potential upside for your business problem, e.g. higher accuracy or smaller uncertainty. After all, a benchmark lets you assess what seems technically possible and reasonable.
I am a big fan of fair benchmarks. For time series prediction in particular, these can include (see the short sketch after the list):
- a naive prediction (like a constant forecast or a linear model) as well as
- AutoML.
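
To make this concrete, here is a minimal sketch of what such naive baselines could look like. It assumes Python with numpy and scikit-learn, a univariate series in a numpy array, and a simple holdout of the last `h` points; the function names and the synthetic series are just placeholders. Any AutoML or DL model would then have to beat these scores (at acceptable cost) to justify itself:

```python
# Minimal sketch of two naive baselines for a univariate time series.
# Assumes `series` is a 1-D numpy array; the last `h` points are held out for evaluation.
import numpy as np
from sklearn.linear_model import LinearRegression

def naive_forecast(train, h):
    # Constant (persistence) forecast: repeat the last observed value h times.
    return np.full(h, train[-1])

def linear_trend_forecast(train, h):
    # Fit a straight line over time to the training data and extrapolate it.
    t = np.arange(len(train)).reshape(-1, 1)
    model = LinearRegression().fit(t, train)
    future_t = np.arange(len(train), len(train) + h).reshape(-1, 1)
    return model.predict(future_t)

def mae(y_true, y_pred):
    # Mean absolute error as a simple comparison metric.
    return np.mean(np.abs(y_true - y_pred))

# Example usage with a synthetic series; replace with your own data.
series = np.sin(np.linspace(0, 20, 200)) + np.linspace(0, 2, 200)
h = 20
train, test = series[:-h], series[-h:]
print("naive MAE: ", mae(test, naive_forecast(train, h)))
print("linear MAE:", mae(test, linear_trend_forecast(train, h)))
```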
That being said, depending on the problem you want to solve, you can choose other appropriate benchmarks: these may be domain models (e.g. OpenCV models for computer vision) or deep learning models, as you suggested. Please bear in mind the effort and cost involved, especially if data quantity is limited.
Do you have an application or example in mind where you would use a DL model as a benchmark?
Best
Christian