I’m in the middle of module 4, and I’m wondering whether we should aim for 100% model assertiveness. I’m considering a fallback-to-human approach for the workflows I’m building, and that fallback path would also need to be evaluated. Does anyone have experience with this scenario?
Hi rodgco,
Thanks for raising this important point.
I’d say the desired assertiveness of the model, and how it is evaluated and controlled, depends on the type and criticality of the application being built. The focus of this course is on how models can be used rather than on how humans can be kept in the loop. So in the course, much of the workflow is delegated to models even where it could be handled by humans or by strict deterministic code.
It seems to me that this does not have to affect the metrics used to evaluate the overall system. The end-to-end metrics of a system that delegates parts of the workflow to models can be compared directly with those of a system that leaves the same parts to humans or to deterministic code.
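For what it’s worth, here is a minimal sketch of what a confidence-gated fallback-to-human step could look like, and how the same end-to-end metric applies whichever path handled an item. All names (`model_predict`, `human_review`, the threshold value) are hypothetical stand-ins for illustration, not code from the course:

```python
# Sketch of a fallback-to-human workflow gated by model confidence.
# model_predict and human_review are hypothetical stand-ins.

THRESHOLD = 0.8  # below this confidence, defer to a human reviewer


def model_predict(item):
    # Stand-in for a real model call; returns (label, confidence).
    return ("approve", 0.65) if "edge case" in item else ("approve", 0.95)


def human_review(item):
    # Stand-in for the human fallback path.
    return "approve"


def run_workflow(item):
    label, confidence = model_predict(item)
    if confidence < THRESHOLD:
        return human_review(item), "human"
    return label, "model"


# The same end-to-end metric (here, accuracy on final decisions) applies
# whether an item was handled by the model or by the human fallback.
items = ["routine request", "edge case request"]
gold = ["approve", "approve"]
results = [run_workflow(it) for it in items]
accuracy = sum(pred == g for (pred, _), g in zip(results, gold)) / len(items)
deferral_rate = sum(route == "human" for _, route in results) / len(items)
print(accuracy, deferral_rate)  # 1.0 0.5
```

Tracking the deferral rate alongside the quality metric also tells you how much human effort the fallback actually costs, which is usually the trade-off you want to evaluate.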