Interpretable Brain Network Analysis with Graph Neural Networks (Open Access)

Dai, Wei (Spring 2022)

Permanent URL: https://etd.library.emory.edu/concern/etds/9880vs186?locale=en
Published

Abstract

Human brains sit at the center of complex neurobiological systems in which neurons, circuits, and subsystems interact in ways that remain poorly understood. Understanding the brain’s structure and functional mechanisms has long been a central topic in neuroscience and in the clinical treatment of brain disorders. One of the most widely used paradigms in neuroscience is network mapping of the human brain’s connections. Graph Neural Networks (GNNs) have recently gained popularity as a tool for modeling such complex network data. However, as deep models, GNNs offer limited interpretability. In healthcare, decisions are often critical, and researchers find it hard to trust a model that cannot explain its predictions. To enable the effective use of deep models in healthcare, we present IBGNN, an interpretable model for identifying disorder-specific salient regions of interest (ROIs) and the significant connections between them.
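The interpretability mechanism described above can be illustrated with a minimal sketch: a single edge mask, shared across all subjects, down-weights connections before message passing, so the high-weight mask entries that survive training mark the salient connections. The function names and the toy 4-ROI network below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_propagate(A, X, M_logits):
    """One message-passing step with a globally shared edge mask.

    M_logits is learnable; after a sigmoid it lies in (0, 1), one value
    per ROI pair. Only connections the mask keeps contribute to node
    updates, so inspecting M explains which edges drive predictions."""
    M = sigmoid(M_logits)      # soft edge mask in (0, 1)
    A_masked = A * M           # down-weight masked-out connections
    return A_masked @ X        # aggregate neighbor features

# Toy brain network: 4 ROIs, fully connected, 2-dim node features.
rng = np.random.default_rng(0)
A = np.ones((4, 4)) - np.eye(4)
X = rng.standard_normal((4, 2))

# Pretend training drove the mask to keep only the (0, 1) connection.
M_logits = np.full((4, 4), -10.0)
M_logits[0, 1] = M_logits[1, 0] = 10.0

H = masked_propagate(A, X, M_logits)
```

With this mask, node 0's update is dominated by node 1's features, while nodes outside the kept connection receive almost nothing; in the real model the mask logits would be learned jointly with the prediction loss plus sparsity regularization.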

Another obstacle to the wide adoption of GNNs in brain network analysis is the difficulty of performance tuning and fair comparison. There has been no systematic study of how different design choices affect GNN performance on brain networks. To tune our interpretable model, we present BrainGB, a benchmark for brain network analysis with GNNs. We modularize the implementation so that different GNN variants can be tested, use the resulting framework to conduct extensive experiments, and summarize best practices for GNN design on brain networks. To support the development of brain network analysis, we host a website at https://brainnet.us/ with models, tutorials, and examples, along with an open-source framework for designing and testing GNNs on brain networks, also available on the website. We anticipate that this research will offer valuable empirical evidence and insights for future work in this exciting new field.
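One design dimension such a benchmark must modularize is node feature construction: brain-network ROIs carry no natural attributes, so a common choice is the "connection profile", where each ROI is described by its own row of the weighted connectivity matrix, followed by a weighted-sum message-passing step. A minimal sketch, with illustrative names and a toy 3-ROI network (not the BrainGB code itself):

```python
import numpy as np

def connection_profile_features(W):
    """Node features for a brain network: each ROI's feature vector is
    its row of the weighted connectivity matrix W (its connection
    profile to every other ROI)."""
    return W.copy()

def message_passing(W, X):
    """One weighted-sum message-passing step: each ROI aggregates its
    neighbors' features, weighted by connection strength."""
    return W @ X

# Toy weighted brain network over 3 ROIs (symmetric, no self-loops).
W = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.5],
              [0.1, 0.5, 0.0]])

X = connection_profile_features(W)   # (3, 3) node features
H = message_passing(W, X)            # (3, 3) updated node features
```

Swapping in other feature constructions (e.g., node degree or identity features) or other aggregation rules is exactly the kind of modular variation a benchmark framework makes easy to compare.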

GNNs are known to suffer from defects such as over-smoothing and over-squashing. To further improve performance beyond the interpretable model, we present a transformer-based deep model designed specifically for brain network analysis. To exploit the clustered nature of brain networks, we add a differentiable pooling layer, which provides enhanced performance and potential interpretability.
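Differentiable pooling can be sketched in a few lines: a soft assignment matrix S maps N ROIs to K clusters, and the coarsened node features and connectivity are S^T X and S^T A S. The shapes, toy data, and function name below are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diff_pool(A, X, S_logits):
    """Differentiable pooling: softly assign N nodes to K clusters.

    S_logits would be produced by a learnable network in practice; here
    it is given directly. Each row of S sums to 1, so every ROI is
    distributed across clusters and gradients flow through the pooling."""
    S = softmax(S_logits, axis=1)   # (N, K) soft cluster assignments
    X_pooled = S.T @ X              # (K, d) cluster-level features
    A_pooled = S.T @ A @ S          # (K, K) cluster-level connectivity
    return A_pooled, X_pooled, S

# Toy setup: 6 ROIs pooled into 2 clusters, 3-dim node features.
rng = np.random.default_rng(1)
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T          # symmetric, no self-loops
X = rng.standard_normal((6, 3))
S_logits = rng.standard_normal((6, 2))

A_p, X_p, S = diff_pool(A, X, S_logits)
```

Because the assignment is soft rather than a hard partition, the whole operation stays differentiable, and the learned rows of S indicate which functional cluster each ROI belongs to, which is the source of the potential interpretability noted above.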

Table of Contents

1 Introduction

1.1 Problem Definition

2 Background

2.1 Brain Networks

2.2 Generic GNN Models

2.2.1 Spectral Graph Neural Networks

2.2.2 Spatial Graph Neural Networks

2.3 Brain-Specific GNN Models

2.3.1 BrainNetCNN

2.3.2 BrainGNN

3 Our Models

3.1 Interpretable GCN- and GAT-Based Model

3.1.1 The Backbone Prediction Model

3.1.2 The Explanation Generator

3.1.3 The Overall Framework

3.2 Brain Transformer

3.2.1 Model Structure

4 Benchmarks and Model Optimizations

4.1 Node Feature Construction

4.2 Message Passing Mechanisms

4.3 Attention-Enhanced Message Passing

4.4 Pooling Strategies

4.5 Datasets

4.6 Experimental Analysis and Insights

4.6.1 Performance Report

5 Results and Interpretation Analysis

5.1 Experiment Results of Explainable GNN Networks

5.1.1 Datasets and Preprocessing

5.1.2 Compared Methods

5.1.3 Prediction Performance

5.2 Experiment Results of Brain Transformer

5.2.1 Performance

5.3 Neural System Mapping

5.3.1 Salient ROIs

5.3.2 Edges

6 Conclusion

Appendix A

A.1 Implementation Details

A.2 Ethical Statement

A.3 Collaborations

Bibliography

About this Honors Thesis

Rights statement
  • Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.
Language
  • English