Variance Reduction Methods for High-Dimensional Optimal Control Problems

Ting, Yujan (Spring 2022)

Permanent URL: https://etd.library.emory.edu/concern/etds/9306t0527?locale=es
Published

Abstract

We evaluate three variance reduction methods for solving high-dimensional optimal control problems: the stochastic variance reduced gradient (SVRG), streaming stochastic variance reduced gradient (SSVRG), and stochastic recursive gradient (SARAH) algorithms. SVRG has recently become popular for finite-sum optimization; however, it is unclear whether SVRG can be applied to high-dimensional optimal control problems in real-time settings. We modify SVRG to solve the high-dimensional online optimization problem with an infinite dataset. SSVRG and SARAH are two variants of SVRG: SSVRG is a streaming version of SVRG, while SARAH leverages past stochastic gradient information to reduce variance. On several multi-agent collision-avoidance problems, we tune and compare the performance of SVRG, SSVRG, and SARAH against ADAM, a state-of-the-art algorithm. Our numerical experiments demonstrate the effectiveness of variance reduction methods for non-convex, high-dimensional optimal control problems. In particular, SARAH performs best in terms of convergence rate, sampling efficiency, and solution optimality. Compared with SARAH, SVRG and SSVRG are less stable and less robust to hyperparameter changes.
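The gradient estimators behind the methods compared above can be sketched in a few lines. The following is a minimal illustration of the SVRG and SARAH update rules on a toy least-squares finite-sum problem; it is not the thesis implementation, and the problem data, step size, and epoch counts are assumptions chosen purely for illustration.

```python
import numpy as np

# Toy finite-sum least-squares problem: f(w) = (1/n) * sum_i (a_i . w - b_i)^2.
# Illustrative only -- dimensions and step sizes are assumptions, not thesis settings.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true

def grad_i(w, i):
    # Gradient of the i-th component (a_i . w - b_i)^2.
    return 2.0 * (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    # Full (batch) gradient, averaged over all n components.
    return 2.0 * A.T @ (A @ w - b) / n

def svrg(w, eta=0.005, epochs=20, m=None):
    m = m or n
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)  # full gradient at the snapshot point
        for _ in range(m):
            i = rng.integers(n)
            # Control-variate estimator: unbiased, with variance that
            # shrinks as the iterate approaches the snapshot.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - eta * g
    return w

def sarah(w, eta=0.005, epochs=20, m=None):
    m = m or n
    for _ in range(epochs):
        v = full_grad(w)  # anchor each epoch with one full gradient
        w_prev = w.copy()
        w = w - eta * v
        for _ in range(m - 1):
            i = rng.integers(n)
            # Recursive (biased) estimator built from past stochastic
            # gradient information -- the key difference from SVRG.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

loss = lambda w: np.mean((A @ w - b) ** 2)
w0 = np.zeros(d)
print(f"initial {loss(w0):.3e}  svrg {loss(svrg(w0)):.3e}  sarah {loss(sarah(w0)):.3e}")
```

Note the structural contrast: SVRG corrects each stochastic gradient against a fixed snapshot, whereas SARAH updates its estimator recursively from the previous iterate, which is what lets it exploit past gradient information.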

Table of Contents

1 Introduction

----1.1 Motivation

----1.2 Contributions and Outline

2 High-Dimensional Optimization 

----2.1 High-Dimensional Optimal Control

--------2.1.1 Overview

--------2.1.2 Optimal Control Problem

--------2.1.3 Multi-Agent Control Problem

--------2.1.4 Main Formulation

--------2.1.5 Numerical Implementation

----2.2 Numerical Optimization Algorithms 

--------2.2.1 Stochastic Gradient Descent (SGD)

--------2.2.2 Adaptive Learning Methods (ADAM)

--------2.2.3 Variance-Reduced Methods

----2.3 Online Optimization 

3 Variance Reduction Methods 

----3.1 Variance Reduction Basics 

----3.2 Stochastic Variance Reduced Gradient (SVRG) 

----3.3 Streaming Stochastic Variance Reduced Gradient (SSVRG) 

----3.4 Stochastic Recursive Gradient Algorithm (SARAH) 

----3.5 Implementation

4 Numerical Experiments 

----4.1 Experiment Set-Up 

----4.2 Hyperparameter Optimization 

----4.3 Control Problems 

--------4.3.1 Corridor Experiment

--------4.3.2 2-agent Swap Experiment

--------4.3.3 12-agent Swap Experiment

5 Discussion 

6 Conclusion 

About this Honors Thesis

Rights statement
  • Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.
School
Department
Degree
Submission
Language
  • English
Research Field
Keyword
Committee Chair / Thesis Advisor
Committee Members
Last modified
