Crowdsourcing and Semi-Supervised Learning for Detection and Prediction of Hospital-Acquired Pressure Ulcer Injury

Sotoodeh, Mani (Summer 2021)

Permanent URL: https://etd.library.emory.edu/concern/etds/3n204050h?locale=en

Abstract

Pressure ulcer injury (PUI), or bedsore, is “a localized injury to the skin and/or underlying tissue due to pressure.” More than 2.5 million Americans develop PUI annually, and the incidence of hospital-acquired PUI (HAPUI) is around 5% to 6%. Bedsores are associated with reduced quality of life, higher mortality and readmission rates, and longer hospital stays. The Centers for Medicare & Medicaid Services consider PUI the most frequent preventable event, and PUIs are the second most common claim in lawsuits. The current practice of estimating PUI rates through manual quarterly assessments conducted on a single day has many disadvantages, including high cost, subjectivity, and substantial disagreement among nurses, not to mention missed opportunities to adjust practices and improve care immediately. The biggest challenge in HAPUI detection using EHRs is assigning ground truth for HAPUI classification, which requires weighing multiple clinical criteria from nursing guidelines; however, these criteria do not map explicitly to EHR data sources. Furthermore, there is no consistent cohort definition among research works tackling HAPUI detection. Because labels significantly affect a model’s performance, inconsistent labels complicate comparisons across studies. Eliciting multiple opinions for the same HAPUI classification task can remedy this labeling uncertainty, but methods for learning with multiple uncertain labels have been developed mainly for computer vision. However, acquiring images of PUIs in hospitals is not standard practice, so we must rely on tabular or time-series data. Finally, acquiring expert nursing annotations to establish accurate labels is costly; if unlabelled samples can be utilized, a combination of annotated and unlabelled samples could yield a robust classifier.
To overcome these challenges, we introduce the following: 1) a new standardized HAPUI cohort definition that is applicable to EHR data and faithful to clinical guidelines; 2) CrowdTeacher, a novel model for learning with unreliable crowdsourcing labels using sample-specific perturbations, suitable for the sparse annotations of HAPUI detection; 3) an exploration of unstructured notes to glean better feature representations for HAPUI detection; and 4) the incorporation of unlabelled data into HAPUI detection via semi-supervised learning to reduce annotation costs.
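The fourth contribution rests on the standard self-training loop: fit a classifier on the annotated samples, pseudo-label the unlabelled samples it is most confident about, fold those into the training set, and repeat. The following is a minimal sketch of that generic loop, not the dissertation's algorithm; the nearest-centroid classifier, the softmax-over-distances confidence score, and the `threshold` parameter are illustrative assumptions.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, threshold=0.8, max_rounds=5):
    """Generic self-training sketch (hypothetical, not the thesis model):
    repeatedly fit a nearest-centroid classifier on the labelled set,
    then absorb confidently pseudo-labelled unlabelled samples."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        # Fit: one centroid per class on the current labelled set.
        centroids = np.stack([X_lab[y_lab == c].mean(axis=0)
                              for c in (0, 1)])
        # Confidence: softmax over negative distances to the centroids.
        d = np.linalg.norm(pool[:, None, :] - centroids[None], axis=2)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = p.max(axis=1), p.argmax(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break  # nothing confident enough left to pseudo-label
        # Promote confident pseudo-labelled samples into the labelled set.
        X_lab = np.vstack([X_lab, pool[keep]])
        y_lab = np.concatenate([y_lab, pred[keep]])
        pool = pool[~keep]
    return X_lab, y_lab
```

With extremely unbalanced classes, as in HAPUI detection, a fixed confidence threshold tends to pseudo-label only the majority class, which is the failure mode the thesis's "Self-training for Extremely Unbalanced Classes" chapter addresses.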

Table of Contents

Abstract

Introduction

Background and Related Work

  • Truth Inference in Crowdsourcing
  • Truth Inference Definition
  • Truth Inference Methods
  • Learning in the Context of Crowdsourcing
  • Synthetic Data Generation
  • Selective Gradient Propagation and Co-teaching
  • Self-training, a Semi-Supervised Learning Paradigm

A Standardized HAPUI Cohort Compatible with Clinical Guidelines

  • PUI Terminology in EHR
  • Challenges in PUI Cohort Definition
  • Experimental Settings for Cohort Comparison
  • Feature Construction
  • Classifiers and Evaluation Metrics
  • Defining Training and Test Sets in the Cohorts
  • Results for Cohorts Comparison
  • Cohort Definition's Impact on Classifiers' Performance for HAPUI Detection
  • Significant Features for Classifiers and Cohorts
  • Cohort Definition for HAPUI Detection, Next Steps and Reflections

CrowdTeacher: Robust Co-teaching with Noisy Sparse Answers and Sample-specific Perturbations

  • Crowdsourcing: Toward Labeling Unlabelled Data
  • Notations for Learning with Uncertain Crowdsourcing Labels
  • Uncertainty-aware Perturbation Scheme and Modifying Co-Teaching
  • CrowdTeacher
  • Experimental Settings
  • Baseline Methods
  • Annotation Simulation
  • Datasets
  • CrowdTeacher Results
  • Synthetic Dataset
  • PUI Dataset
  • Length of Stay Dataset
  • CrowdTeacher and HAPUI Detection

Discovering Better Features and Improving Performance Using Unstructured Notes

  • Text Features, Missing Piece for HAPUI Detection
  • Dataset and Labeling Details
  • Data Analysis
  • Negation Detection in Text Data
  • Transforming Text into Vectorized Features
  • PUI Classifiers
  • Experimental Setup
  • Experiments Overview and Data Split
  • Evaluation Metric
  • Inferring Word Significance from Feature Importance
  • Results and Discussion
  • Impact of Negation Detection on AUC and F1 Score
  • Classifiers' Performance Comparison
  • Conclusion and Potential for HAPUI Detection Using Text

Leveraging Unlabeled Samples for HAPUI Detection

  • Our Model
  • Self-training for Extremely Unbalanced Classes
  • Algorithm Details
  • Experimental Settings
  • Results for HAPUI Detection
  • Results for LOS Prediction
  • Conclusion

About this Dissertation

Rights statement
  • Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.
Language
  • English
