Probabilistic Zero-Knowledge Proofs for Neural Network Robustness
Han, Xu (Spring 2025)
Abstract
As machine learning (ML) models are increasingly deployed in high-stakes domains, ensuring their robustness has become critical. However, verifying such properties often requires access to a model's secret parameters, which poses a privacy risk. This thesis proposes a novel cryptographic approach that uses Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) to certify the robustness of neural networks without revealing their internal parameters. Building on prior work such as FairProof and GeoCert, we present a scalable, privacy-preserving method that uses verifiable randomness and circuit-based forward propagation to simulate and prove neural network behavior. Our approach introduces a probabilistic strategy that avoids the computational cost of exact robustness verification and supports realistic deployment scenarios. Experiments on synthetic multilayer perceptrons demonstrate that our zk-SNARK-based system can distinguish fair from unfair models while maintaining constant-time verification and manageable proof sizes. This work yields a practical, scalable method for zero-knowledge verification of machine learning models, enabling secure and privacy-preserving model validation.
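As a rough illustration of the probabilistic strategy described above, the following minimal Python sketch (not the thesis's zk-SNARK circuit) estimates an MLP's robustness at a point by sampling random perturbations and checking whether the predicted label stays constant. The weight matrices, perturbation radius `eps`, and sample count are hypothetical placeholders.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Plain forward pass of a ReLU MLP; returns the predicted class index."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0, W @ h + b)          # hidden layers use ReLU
    logits = weights[-1] @ h + biases[-1]     # final layer is linear
    return int(np.argmax(logits))

def probabilistic_robustness(x, weights, biases, eps=0.05, n_samples=200, seed=None):
    """Sample random points in the eps-box around x and report the fraction
    whose predicted label matches the label at x (a Monte Carlo robustness estimate)."""
    rng = np.random.default_rng(seed)
    base_label = mlp_forward(x, weights, biases)
    agree = 0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if mlp_forward(x + delta, weights, biases) == base_label:
            agree += 1
    return agree / n_samples

# Hypothetical 2-layer MLP on 4-dimensional inputs, just to exercise the check.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)
print(probabilistic_robustness(x, weights, biases, seed=1))
```

In the system described in this thesis, the analogous check is instead expressed as an arithmetic circuit over fixed-point values (see the Forward Propagation and Number Preprocessing sections) so that the result can be proven in zero knowledge.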
Table of Contents
1 Introduction
2 Background
  2.1 zk-SNARK
  2.2 Things Behind the Scenes
    2.2.1 Writing the Circuit
    2.2.2 Arithmetic Circuit
    2.2.3 Introduction to Rank-1 Constraint System (R1CS)
    2.2.4 Example of R1CS
    2.2.5 Trusted Setup and Groth16 in zk-SNARK
  2.3 Robustness of Machine Learning Models
  2.4 FairProof and GeoCert
  2.5 Randomness
3 Approach
  3.1 Public and Private Inputs in the Proof System
  3.2 Verifiable Randomness
  3.3 Inputs Generation
  3.4 Real World Scenario Simulation
  3.5 Forward Propagation
  3.6 Number Preprocessing
4 Experiments
  4.1 Fairness Verification
  4.2 Feasibility Analysis
5 Conclusion
A Source Code
  A.1 Code for Verifiable Randomness
  A.2 Code for Forward Propagation and Rounding
Bibliography
About this Honors Thesis
Permission granted by the author to include this thesis or dissertation in this repository. All rights reserved by the author. Please contact the author for information regarding the reproduction and use of this thesis or dissertation.