Interpretable and Interactive Representation Learning on Geometric Data

Yuyang Gao (Fall 2022)

Permanent URL: https://etd.library.emory.edu/concern/etds/k930bz47x?locale=en

Abstract

In recent years, representation learning on geometric data, such as images and graph-structured data, has advanced rapidly and achieved significant progress thanks to the development of Deep Neural Networks (DNNs), including Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs). However, DNNs typically offer very limited transparency, which makes it difficult to observe and understand when and why the models make successful or unsuccessful predictions. While research on local explanation techniques has grown quickly in recent years, most of it focuses on "how to generate the explanations" rather than on "whether the explanations are accurate and reasonable," "what to do if the explanations are inaccurate or unreasonable," and "how to adjust the model to generate more accurate and reasonable explanations."

To explore and answer the above questions, this dissertation pursues a new line of research called "Explanation-Guided Learning" (EGL), which intervenes in a deep learning model's behavior through XAI techniques to jointly improve DNNs in terms of both explainability and generalizability. In particular, we explore EGL on geometric data, including image and graph-structured data, which is currently under-explored [hong2020human] in the research community due to the complexity of and inherent challenges in explaining geometric data.

To achieve these goals, we start by exploring interpretability methods for geometric data, using bio-inspired approaches to understand the concepts learned by DNNs, and propose methods to explain the predictions of GNNs in healthcare applications. Next, we design GNES, an interactive and general explanation supervision framework for graph neural networks that enables a "learning to explain" pipeline, so that more reasonable and steerable explanations can be provided. Finally, we propose two generic frameworks, GRADIA and RES, for robust visual explanation-guided learning; they introduce novel explanation model objectives that can handle noisy human annotation labels as the supervision signal, with a theoretical justification of the benefit to model generalizability.
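As an illustrative sketch only (not the dissertation's actual objectives), the shared idea behind explanation supervision can be written as a joint loss: a standard task loss plus a penalty on the distance between the model's explanation (here, a simple gradient-times-input saliency for a linear classifier) and a human annotation mask. All function names, the choice of saliency, and the weighting scheme below are assumptions for illustration:

```python
import numpy as np

def joint_loss(w, x, y, mask, lam=0.5):
    """Explanation-guided objective: task loss + lam * explanation loss.

    w    -- weight vector of a toy linear classifier
    x    -- one input example
    y    -- binary label (0 or 1)
    mask -- human annotation in [0, 1] marking features deemed relevant
    lam  -- trade-off between task accuracy and explanation alignment
    """
    # Task term: logistic (cross-entropy) loss of the linear classifier.
    z = float(w @ x)
    p = 1.0 / (1.0 + np.exp(-z))
    task = -(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Explanation term: gradient-times-input saliency, normalized to [0, 1],
    # penalized for deviating from the (possibly noisy) human mask.
    sal = np.abs(w * x)
    sal = sal / (sal.max() + 1e-8)
    expl = np.mean((sal - mask) ** 2)

    return task + lam * expl
```

With lam = 0 the objective reduces to ordinary supervised training; increasing lam steers the model's explanations toward the human annotations, which is the basic mechanism the explanation supervision frameworks above build on (with far richer explanation extractors and robustness to annotation noise).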

This research spans multiple disciplines and promises to make general contributions to domains such as deep learning, explainable AI, healthcare, computational neuroscience, and human-computer interaction by putting forth novel frameworks applicable to real-world problems where both interpretability and task performance are crucial.

Table of Contents

1 Introduction
  1.1 Research Issues
    1.1.1 Interpretable and Efficient Bio-inspired Deep Learning via Neuronal Assemblies
    1.1.2 Interpretation for Dynamic Attributed Graphs via Hierarchical Attention
    1.1.3 Explanation-Guided Representation Learning on Geometric Data
  1.2 Contribution
  1.3 Thesis Organization

2 Interpretable and Efficient Bio-inspired Deep Learning via Neuronal Assemblies
  2.1 Introduction
  2.2 Biologically-Enhanced Artificial Neuronal Assembly Regularization
    2.2.1 Layer-wise Neuron Co-activation Divergence
    2.2.2 The First-Order Layer-wise Neuron Correlation
    2.2.3 The Second-Order Layer-wise Neuron Correlation
  2.3 Experimental Study
    2.3.1 The Interpretable Patterns of BEAN Regularization
    2.3.2 Learning Sparse and Efficient Networks
    2.3.3 Towards Few-Shot Learning from Scratch with BEAN Regularization
  2.4 Conclusion

3 Interpretation for Dynamic Attributed Graphs via Hierarchical Attention
  3.1 Introduction
  3.2 Related Work
    3.2.1 Online Health Communities Analysis
    3.2.2 Dynamic Graph Representation Learning
    3.2.3 Hierarchical Attention Mechanism
    3.2.4 Neural Encoder-Decoder Models
  3.3 Problem Formulation
    3.3.1 User Forum Activities as a Dynamic Graph
    3.3.2 Learning Sequence from Dynamic Graph
  3.4 Dynamic Graph-to-Sequence Model
    3.4.1 The DynGraph2Seq Framework
    3.4.2 Dynamic Graph Encoder
    3.4.3 Sequence Decoder with Dynamic Graph Hierarchical Attention
  3.5 Experiments
    3.5.1 Experimental Settings
    3.5.2 Performance
    3.5.3 Interpretability Analysis
    3.5.4 Health Stage Sequence Analysis
  3.6 Conclusion

4 Explanation-Guided Representation Learning on Geometric Data
  4.1 Introduction
  4.2 EGL on Graph-Structured Data
    4.2.1 Related Work
    4.2.2 GNES Framework
    4.2.3 Experiments
  4.3 EGL on Image Data
    4.3.1 Related Work
    4.3.2 GRADIA Framework
    4.3.3 RES Framework
    4.3.4 Experiments
  4.4 Conclusion

5 Conclusions and Future Work
  5.1 Research Tasks
    5.1.1 Development of Interpretability Techniques for DNNs
    5.1.2 Explanation-Guided Learning on Graphs
    5.1.3 Explanation-Guided Learning on Images
  5.2 Publications
    5.2.1 Published Papers
    5.2.2 Submitted and In-Preparation Papers
  5.3 Future Research Directions
    5.3.1 Explanation-Guided Learning on Medical Image Analysis
    5.3.2 Trustworthiness and Fairness of Deep Learning Explanation
    5.3.3 Contrastive Explanation-Guided Learning
    5.3.4 Interactive Explanation-Guided Learning Pipeline on Continual & Active Learning

Appendix A Explanation-Guided Representation Learning on Geometric Data
  A.1 Proof of Theorem 2
  A.2 Proof of Lemma 1
  A.3 Human Annotation and Evaluation UI Demonstration
  A.4 Efficient Adaptive Threshold Searching Algorithm
  A.5 Detailed Implementation of the Learnable Imputation Layers

Bibliography

About this Dissertation

Rights statement
  • Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.
Language
  • English