I’m exploring how to combine standard machine learning practices with large language models (LLMs) in the medical domain, particularly when working with large datasets.
Suppose I have a new, unseen patient sample and want to predict their risk of heart disease. A gradient-boosted model like XGBoost can learn correlations among tabular features (e.g., age, sex, and cholesterol), but there is a wealth of broader medical knowledge that could improve the prediction.
One idea I’m considering is retrieval-based in-context learning with an LLM. For instance, I could retrieve, say, the 100 examples most similar to the new patient from my dataset and include them as few-shot context in the prompt. The LLM would then draw on both the retrieved examples and its general medical knowledge to produce a more informed risk assessment, potentially with explanations.
- How can this approach be effectively applied when working with large datasets?
- Are there strategies to optimize how LLMs use example-based prompts to enhance predictive accuracy and interpretability in medical applications?
- Is there any prior work addressing this problem?
I’d appreciate any insights on how to combine ML-driven insights with LLMs in this context, especially when handling vast datasets.