🔥FLAME - Foundation Models for AI in Life Sciences and Medicine
Workshop in planning
The healthcare industry is a prolific producer of data, with sources ranging from clinical data such as electronic health record (EHR) text, health sensor data, medical imaging, patient-generated data, and clinical reports, to biomolecular data such as drug-molecule data, metabolomics data, nucleic acid sequences, proteins, gene expression data, and single-cell data.
However, much of this complex data is unstructured, incomplete, inconsistent, error-prone, and heterogeneous, rendering traditional machine learning models less effective at analyzing it. To overcome these challenges, foundation models (FMs) such as large language models enable novel applications on unstructured data, reducing the need for manual feature engineering and for large volumes of labeled data, since they are pretrained on unlabeled data at scale. Foundation models have the potential to revolutionize the clinical understanding, diagnosis, and treatment of complex medical conditions, while also improving the accuracy and efficiency of healthcare systems. This technology can enhance patient outcomes, streamline processes, and reduce costs for healthcare providers and patients alike.
Recently, foundation models have been employed in a handful of healthcare tasks, such as generating text, creating images from text descriptions, captioning images, and vision-language contrastive learning. Despite early signs of promise, the use of foundation models in healthcare remains largely underexplored and underutilized, limited to a few publicly available medical datasets, which raises concerns about generalization and robustness. Additionally, the empirical evidence needed to establish their effectiveness is still lacking.