During Week 2 of the AI for Good Framework, it would have been helpful to see an example message with its labels, to learn which model was used, and to know exactly how many training records were used. He mentions “a single layer model,” and that is about it.
It would also be great to see how the actual end-to-end system enables clinic staff to label or verify labeled data.
I don’t think revealing these details, which could be done in just two sentences, would overwhelm anyone. Most importantly, showing the system itself, for example with screenshots, would let participants actually “see” the solution that was implemented, rather than just listening to Robert describe it in vague terms.
As @aryan010204 mentioned, we didn’t want to overwhelm people new to AI. At the same time, keeping intermediate/advanced learners in mind, everything is available in the utils.py files for keen learners. You can view all of the code from within the notebook via File --> Open...
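For anyone curious what “a single layer model” for classifying messages might look like in practice, here is a minimal sketch. To be clear, this is my own illustration, not the course’s actual pipeline: the messages, the labels, and the choice of bag-of-words features with logistic regression (a single linear layer) are all assumptions made for the example.

```python
# Hypothetical sketch of a "single layer" message classifier:
# bag-of-words features feeding one linear (logistic regression) layer.
# The messages and labels below are invented for illustration only;
# they are NOT the course's real clinic data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset standing in for labeled clinic messages.
messages = [
    "I missed my appointment yesterday",
    "Can I reschedule my visit",
    "The medicine makes me dizzy",
    "I feel nauseous after the pills",
]
labels = ["appointment", "appointment", "side_effect", "side_effect"]

# Pipeline: text -> token counts -> a single linear layer.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Classify a new incoming message.
print(model.predict(["I want to reschedule my appointment"])[0])
```

In a real deployment the labels would come from clinic staff (the labeling/verification workflow asked about above), and the dataset would be far larger than four messages.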