explAInation
The primary objective of this project is to investigate a system architecture and methodologies for deriving explanations from convolutional neural network (CNN) models, with the aim of reaching a self-explanatory, human-comprehensible neural network. As a use case, we address the detection of dementia and mild cognitive impairment due to Alzheimer’s disease and frontotemporal lobar degeneration in magnetic resonance imaging (MRI) data.
Accuracy
Machine learning models must be accurate in order to provide reasonable explanations.
We validate all models on independent datasets to test their performance.
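As a minimal sketch of what such independent validation can look like, the snippet below trains a toy classifier on one synthetic cohort and reports its performance on two others that were never used for training. The cohorts, features, and model are placeholders invented for illustration, not the project's actual data or network.

```python
# Minimal sketch of independent-cohort validation (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Synthetic stand-in for an MRI-derived feature table (e.g., regional volumes)."""
    X = rng.normal(shift, 1.0, size=(n, 16))
    y = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)  # toy label rule
    return X, y

# Train once on the development cohort ...
X_train, y_train = make_cohort(500)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ... then report performance on independent cohorts that were never
# touched during training or model selection.
for name, shift in [("cohort_A", 0.1), ("cohort_B", 0.3)]:
    X_ext, y_ext = make_cohort(300, shift)
    proba = model.predict_proba(X_ext)[:, 1]
    print(name,
          "AUC:", round(roc_auc_score(y_ext, proba), 3),
          "bACC:", round(balanced_accuracy_score(y_ext, proba > 0.5), 3))
```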
Focus
Machine learning models need to focus on key features that are causally related to the target and are often known a priori.
Relevance maps help us assess the level of noise and detect bias in the training data.
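To make the idea concrete, here is a minimal sketch of deriving a relevance map from a trained CNN. It uses a plain gradient×input attribution on a tiny stand-in network; the project's actual model and attribution method (e.g., layer-wise relevance propagation) may differ.

```python
# Illustrative sketch of deriving a relevance map from a CNN.
# Gradient*input is used as a simple stand-in attribution method.
import torch
import torch.nn as nn

# Tiny 3D CNN standing in for the real diagnostic model.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(8, 2),  # two classes: patient vs. control
)
model.eval()

scan = torch.randn(1, 1, 32, 32, 32, requires_grad=True)  # placeholder MRI volume

logits = model(scan)
logits[0, 1].backward()  # relevance w.r.t. the "patient" class

# High values mark voxels that contributed strongly to the decision;
# inspecting where they cluster helps reveal noise or dataset bias.
relevance = (scan.grad * scan.detach()).abs().squeeze()
print(relevance.shape, relevance.max().item())
```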
Explanation
Machine learning models should provide an explanation describing the decision-making process in an intuitive way.
Our goal is to develop a system architecture that can extract relevant information and dynamically synthesize descriptive explanations.
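A purely hypothetical skeleton of such an architecture is sketched below: a pipeline coupling a diagnostic model, an attribution step, and a verbalization step that turns relevance into a descriptive sentence. All components are stubs invented for illustration, not the project's actual design.

```python
# Hypothetical pipeline skeleton (illustration only; all components are stubs).
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ExplainablePipeline:
    predict: Callable[[np.ndarray], str]           # scan -> diagnostic label
    attribute: Callable[[np.ndarray], np.ndarray]  # scan -> relevance map
    verbalize: Callable[[np.ndarray], str]         # relevance -> description

    def explain(self, scan: np.ndarray) -> str:
        label = self.predict(scan)
        relevance = self.attribute(scan)
        return f"Prediction: {label}. {self.verbalize(relevance)}"

# Stub components wired together:
pipe = ExplainablePipeline(
    predict=lambda s: "AD" if s.mean() > 0 else "CN",
    attribute=lambda s: np.abs(s),
    verbalize=lambda r: f"Highest relevance at voxel {np.unravel_index(r.argmax(), r.shape)}.",
)
print(pipe.explain(np.random.randn(8, 8, 8)))
```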
Deep learning: InteractiveVis
In 2020, Martin Dyrba developed a convolutional neural network architecture to detect Alzheimer’s disease in MRI scans. The diagnostic performance was validated in three independent cohorts.
From the neural networks, we can derive relevance maps that indicate the brain areas contributing most to the diagnostic decision. Medial temporal lobe atrophy emerged as the most relevant feature, which matched our expectations, as hippocampus volume is the best-established neuroimaging marker for Alzheimer’s disease.
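One common way to relate a relevance map to anatomy is to aggregate relevance per region of a brain atlas. The sketch below illustrates this with random placeholder data; the region labels and arrays are invented for illustration, not the project's actual atlas or results.

```python
# Illustrative region-wise aggregation of a relevance map against an atlas.
import numpy as np

rng = np.random.default_rng(0)
relevance = rng.random((32, 32, 32))           # placeholder relevance map
atlas = rng.integers(0, 3, size=(32, 32, 32))  # placeholder region labels (0 = background)
region_names = {1: "hippocampus", 2: "medial temporal cortex"}

# Mean relevance per anatomical region, sorted by contribution.
scores = {name: float(relevance[atlas == label].mean())
          for label, name in region_names.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean relevance {score:.3f}")
```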