A Trainable Conjugate Gradient Method for Image Reconstruction
Wu, Junyuan (Spring 2019)
Abstract
Deep learning has become an important tool in image classification, recognition, and, more recently, image reconstruction. Image reconstruction is an ill-posed inverse problem that is commonly solved by minimizing an objective function consisting of a data misfit term and a regularization term. Two key challenges in solving inverse problems are designing an effective regularization term and an efficient iterative solver.
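To make this concrete (using generic notation assumed here, not quoted from the thesis): for a linear forward operator A, measured data b, a regularization operator L, and a weight λ > 0, the Tikhonov-regularized problem and its normal equations read

```latex
\min_{x}\; \tfrac{1}{2}\,\|A x - b\|_2^2 \;+\; \tfrac{\lambda}{2}\,\|L x\|_2^2
\qquad\Longleftrightarrow\qquad
\left(A^\top A + \lambda\, L^\top L\right) x \;=\; A^\top b .
```

The system matrix on the right is symmetric positive semi-definite, which is precisely the setting in which the Conjugate Gradient method applies.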
This thesis presents a trainable Conjugate Gradient (CG) method that we call VNCG. Our method follows the framework of variational networks (VN); the key idea is to unroll and train a CG method with a fixed number of iterations. In our numerical experiments, we consider linear inverse problems and train a convolution stencil that represents the regularization operator in a Tikhonov-type objective. We compare two strategies: a single stencil shared by all iterations and a more flexible approach that assigns a different stencil to each iteration.
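The following is a minimal sketch of that unrolled structure, assuming a dense matrix forward operator, a 2-D image, and SciPy's ndimage.convolve for the stencil; the function names (cg_fixed_iters, normal_op, apply_stencil) are illustrative and not taken from the thesis. It only shows the fixed-iteration forward pass; in VNCG the stencil entries would be learned by backpropagating through these unrolled iterations.

```python
# Sketch of a CG solver with a fixed number of iterations for the Tikhonov
# normal equations (A^T A + lam * L^T L) x = A^T b, with L applied as a small
# convolution stencil. All names are illustrative; the stencil here is a fixed
# NumPy array, whereas in VNCG it would be a trainable parameter.
import numpy as np
from scipy.ndimage import convolve


def apply_stencil(x, stencil):
    """Apply the regularization operator L as a 2-D convolution (zero padding)."""
    return convolve(x, stencil, mode="constant", cval=0.0)


def apply_stencil_adjoint(y, stencil):
    """Apply L^T, i.e. convolution with the flipped stencil (zero padding)."""
    return convolve(y, stencil[::-1, ::-1], mode="constant", cval=0.0)


def normal_op(x, A, stencil, lam):
    """Evaluate (A^T A + lam * L^T L) x for a 2-D image x."""
    data_term = (A.T @ (A @ x.ravel())).reshape(x.shape)
    reg_term = apply_stencil_adjoint(apply_stencil(x, stencil), stencil)
    return data_term + lam * reg_term


def cg_fixed_iters(A, b, img_shape, stencil, lam, n_iters=10):
    """Run exactly n_iters CG steps on the normal equations (unrolled CG)."""
    x = np.zeros(img_shape)
    r = (A.T @ b).reshape(img_shape) - normal_op(x, A, stencil, lam)
    p = r.copy()
    rs_old = np.sum(r * r)
    for _ in range(n_iters):
        Ap = normal_op(p, A, stencil, lam)
        alpha = rs_old / np.sum(p * Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.sum(r * r)
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x


if __name__ == "__main__":
    # Toy denoising-style example: A is the identity, L is a Laplacian stencil.
    n = 32
    A = np.eye(n * n)
    x_true = np.random.rand(n, n)
    b = A @ x_true.ravel() + 0.05 * np.random.randn(n * n)
    laplacian = np.array([[0.0, 1.0, 0.0], [1.0, -4.0, 1.0], [0.0, 1.0, 0.0]])
    x_rec = cg_fixed_iters(A, b, (n, n), laplacian, lam=0.1, n_iters=10)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

The sketch above corresponds to the constant-stencil strategy, reusing one stencil in every iteration; the per-iteration variant compared in the abstract would instead pass a list of stencils and index it inside the loop.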
Table of Contents
1) Introduction
2) Background
   2.1 Conjugate Gradient Method
   2.2 Trainable Regularization Operators
   2.3 Variational Network
   2.4 Training Convolution Operator
3) Numerical Experiments
   3.1 Methods
   3.2 Tomographic Shepp-Logan Phantom Reconstruction
   3.3 General Image Deblurring
   3.4 Undersampled Tomography
4) Summary and Conclusion
About this Honors Thesis

| Field | Value |
|---|---|
| School | |
| Department | |
| Degree | |
| Submission | |
| Language | |
| Research Field | |
| Keyword | |
| Committee Chair / Thesis Advisor | |
| Committee Members | |
Primary PDF

| Title | Date Uploaded |
|---|---|
| A Trainable Conjugate Gradient Method for Image Reconstruction | 2019-04-08 18:31:08 -0400 |