Question Answering with User Generated Content

Savenkov, Denis (2017)

Permanent URL: https://etd.library.emory.edu/concern/etds/pv63g111k?locale=fr
Published

Abstract

Modern search engines have made dramatic progress in answering many user questions, especially factual ones whose answers can be retrieved from, or directly inferred over, a knowledge base. However, many other, more complex factual, opinion, and advice questions are still largely beyond the competence of computer systems. For such information needs, users still have to dig into the "10 blue links" of search results and extract the relevant information themselves. As conversational agents become more popular, question answering (QA) systems are increasingly expected to handle such complex questions and to provide users with helpful and concise information.

In my dissertation, I develop new methods to improve the performance of question answering systems for a diverse set of user information needs using various types of user-generated content, including text documents, community question answering archives, knowledge bases, and direct human contributions, and I explore the opportunities that conversational settings offer for information-seeking scenarios.

To improve factoid question answering, I develop techniques for combining information from unstructured, semi-structured, and structured data sources. More specifically, I propose a model for relation extraction from question-answer pairs, the Text2KB system for utilizing textual resources to improve knowledge base question answering, and the EviNets neural network framework for joint reasoning over structured and unstructured data sources. Next, I present a non-factoid question answering system that effectively combines information obtained from question-answer archives, regular web search, and real-time crowdsourcing contributions. Finally, the dissertation describes the findings and insights of three user studies, conducted to examine how people use dialog in information-seeking scenarios and how existing commercial products can be improved, e.g., by responding with suggestions or clarifications for hard and ambiguous questions.

Together, these techniques improve the performance of question answering over a variety of questions a user might have, increasing the power and breadth of QA systems, and suggest promising directions for improving question answering in conversational scenarios.

Table of Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . 7

2 Background and Related Work . . . . . . . . . . . . . . . . . . 9

2.1 Factoid Question Answering . . . . . . . . . . . . . . . . . 10

2.1.1 Text-based Question Answering . . . . . . . . . . . . . 10

2.1.2 Knowledge Base Question Answering . . . . . . . . . . . 13

2.1.3 Information Extraction . . . . . . . . . . . . . . . . . 18

2.1.4 Hybrid Question Answering . . . . . . . . . . . . . . . 20

2.2 Non-factoid Question Answering . . . . . . . . . . . . . . . 23

2.3 Crowdsourcing for Question Answering . . . . . . . . . . . . 26

2.4 Interactions between Users and QA Systems . . . . . . . . . 28

2.4.1 User Assistance in Information Retrieval . . . . . . . . 28

2.4.2 Conversational Search and Question Answering . . . . . . 29

3 Combining Data Sources for Factoid Question Answering . . . . . 33

3.1 Relation Extraction from Question-Answer Pairs . . . . . . . 35

3.1.1 Relation Extraction Models . . . . . . . . . . . . . . . 37

3.1.2 Experiments . . . . . . . . . . . . . . . . . . . . . . 41

3.1.3 Analysis and Discussion . . . . . . . . . . . . . . . . 44

3.2 Text2KB: Augmenting Knowledge Base Question Answering with External Text Data . . . . . . . . . . . . 46

3.2.1 Baseline Approach . . . . . . . . . . . . . . . . . . . 49

3.2.2 Text2KB Model . . . . . . . . . . . . . . . . . . . . . 53

3.2.3 Experimental Results . . . . . . . . . . . . . . . . . . 59

3.2.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3 EviNets: Joint Model for Text and Knowledge Base Question Answering . . . . . . . . . . . . 70

3.3.1 Model and Architecture . . . . . . . . . . . . . . . . . 72

3.3.2 Experimental Evaluation . . . . . . . . . . . . . . . . 75

3.3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . 81

3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 82

4 Improving Non-factoid Question Answering . . . . . . . . . . . . 83

4.1 Ranking Answers and Web Passages for Non-factoid Question Answering . . . . . . . . . . . . 85

4.1.1 Candidate Answer Generation . . . . . . . . . . . . . . 87

4.1.2 Candidate Ranking . . . . . . . . . . . . . . . . . . . 88

4.1.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . 93

4.2 CRQA: Crowd-powered Real-time Automatic Question Answering System . . . . . . . . . . . . 96

4.2.1 Evaluating Crowdsourcing for Question Answering . . . . 97

4.2.2 System Design . . . . . . . . . . . . . . . . . . . . . 105

4.2.3 Experiments . . . . . . . . . . . . . . . . . . . . . . 110

4.2.4 Analysis and Discussion . . . . . . . . . . . . . . . . 113

4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5 Conversational Question Answering . . . . . . . . . . . . . . . 120

5.1 Conversational Search With Humans, Wizards, and Chatbots . . 122

5.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . 122

5.1.2 Study Design . . . . . . . . . . . . . . . . . . . . . . 124

5.1.3 Results . . . . . . . . . . . . . . . . . . . . . . . . 127

5.1.4 Discussion and Design Implications . . . . . . . . . . . 132

5.2 Search Hints for Complex Informational Tasks . . . . . . . . 134

5.2.1 User Study . . . . . . . . . . . . . . . . . . . . . . . 135

5.2.2 Results and Discussion . . . . . . . . . . . . . . . . . 138

5.3 Clarifications in Conversational Question Answering . . . . 142

5.3.1 Dataset Description . . . . . . . . . . . . . . . . . . 143

5.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . 145

5.3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . 150

5.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 151

6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . 153

6.1 Summary of the Results . . . . . . . . . . . . . . . . . . . 153

6.1.1 Combining KB and Text Data for Factoid Question Answering . . . . . . . . . . . . 154

6.1.2 Ranking Answer Passages for Non-factoid Question Answering . . . . . . . . . . . . 156

6.1.3 Question Answering in a Conversational Setting . . . . . 157

6.2 Limitations and Future Work . . . . . . . . . . . . . . . . 158

6.3 Contributions and Potential Impact . . . . . . . . . . . . . 161

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

About this Dissertation

Rights statement
  • Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.
Language
  • English