Explainable AI in Medical Imaging: Interpreting Multi-Modality Inference with Neuroimaging and EHR

dc.contributor.advisor: Landman, Bennett A
dc.creator: Kerley, Cailey Irene
dc.date.accessioned: 2023-01-06T21:26:29Z
dc.date.available: 2023-01-06T21:26:29Z
dc.date.created: 2022-12
dc.date.issued: 2022-11-07
dc.date.submitted: December 2022
dc.identifier.uri: http://hdl.handle.net/1803/17885
dc.description.abstract: Medical image processing is the art and science of extracting clinically meaningful information from medical images. One exciting facet of this field is multi-modal modeling: combining various sources of medical data into a single model of a disease. These different data sources, from imaging modalities such as magnetic resonance imaging (MRI) to electronic health records (EHR), each contain a unique piece of the patient's story; fusing these heterogeneous sources allows imaging models to consider the whole person when making predictions. This growing area of research has many possible clinical applications but currently faces several challenges. Limited data availability in medical imaging produces models that are biased and generalize poorly; accumulating multiple data sources for multi-modal modeling further restricts data availability and may heighten these biases. Additionally, many modeling techniques currently in use suffer from the "black-box" problem: though models generate highly accurate predictions, the complex decision-making process that precedes each prediction is difficult, and sometimes impossible, to translate into terms humans can understand. This work investigates model interpretability as an important component for addressing limited-data settings and providing explanations for predictions. We first introduce several innovations in interpretable traditional machine learning for both neuroimaging and EHR, including adapting big-data analysis methods to limited multi-modal data settings, translating a visually interpretable machine learning framework to multi-modal analysis, improving the performance and scalability of big EHR analysis, and extending the interpretability of EHR models. Next, we investigate deep learning models as an interpretable manifold embedding method for medical data. We propose a computationally efficient unsupervised optimization technique and demonstrate that it produces interpretable manifold embeddings of both neuro-MRI and EHR, which may be used for secondary classification and regression tasks. Finally, this work culminates in an interpretable framework for multi-modal modeling of neuro-MRI and EHR: the proposed pipeline identifies clusters in clinical EHR and explores how those clusters relate to differences in brain structure via MRI. Together, this work expands the growing field of model interpretability and contributes novel methodologies for multi-modal, limited-data medical inference.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: EHR
dc.subject: MRI
dc.subject: Medical Imaging
dc.subject: AI
dc.title: Explainable AI in Medical Imaging: Interpreting Multi-Modality Inference with Neuroimaging and EHR
dc.type: Thesis
dc.date.updated: 2023-01-06T21:26:29Z
dc.type.material: text
thesis.degree.name: PhD
thesis.degree.level: Doctoral
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Vanderbilt University Graduate School
dc.creator.orcid: 0000-0002-0866-8617
dc.contributor.committeeChair: Landman, Bennett A
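
The abstract's final contribution describes a pipeline that identifies clusters in clinical EHR and then examines how those clusters relate to differences in brain structure measured with MRI. The sketch below is a minimal, hypothetical illustration of that general idea only, not the dissertation's actual method: it assumes Python with NumPy, SciPy, and scikit-learn, uses synthetic stand-in arrays (ehr, mri_volume), k-means as a placeholder clustering step, and a one-way ANOVA as a placeholder group comparison.

    # Hypothetical toy pipeline (not the dissertation's method): cluster EHR
    # features, then test whether an MRI-derived structural measure differs
    # across the resulting clusters.
    import numpy as np
    from scipy import stats
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for real data; rows are patients.
    ehr = rng.normal(size=(200, 10))    # e.g., labs, diagnoses, medications
    mri_volume = rng.normal(size=200)   # e.g., one regional brain volume per patient

    # Step 1: identify clusters in the clinical EHR (k-means as a placeholder).
    ehr_scaled = StandardScaler().fit_transform(ehr)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(ehr_scaled)

    # Step 2: relate clusters to differences in brain structure via MRI
    # (one-way ANOVA as a placeholder group comparison).
    groups = [mri_volume[labels == k] for k in range(3)]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"ANOVA across EHR clusters: F={f_stat:.2f}, p={p_value:.3f}")

In practice, the EHR feature set, the number of clusters, the clustering algorithm, and the structural measures would all be determined by the methods developed in the dissertation itself.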

