On Linear Stability of SGD and Input-Smoothness of Neural Networks
Speaker: Chao Ma, Stanford University
Time: Wednesday, Feb 2, 2022, 11:00 AM - 12:00 noon, Eastern Time
Zoom Link: contact tml.online.seminars@gmail.com
Abstract:
The multiplicative structure of the parameters and the input data in the first layer of neural
networks is exploited to build a connection between the landscape of the loss function
with respect to the parameters and the landscape of the model function with respect to
the input data. Through this connection, we show that flat minima regularize the gradient of
the model function, which explains the good generalization performance of flat minima.
We then go beyond flatness and consider high-order moments of the gradient noise,
showing through a linear stability analysis of stochastic gradient descent (SGD) around
global minima that SGD tends to impose constraints on these moments. Combined with the
multiplicative structure, this yields a Sobolev regularization effect of SGD, i.e.,
SGD regularizes the Sobolev seminorms of the model function with respect to the input
data. Finally, under assumptions on the data distribution, we provide bounds on the
generalization error and adversarial robustness of solutions found by SGD.
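
A minimal sketch of the multiplicative structure mentioned above (assuming, for illustration, a scalar-output network written as f(x) = h(Wx), where W is the first-layer weight matrix; this notation is not part of the announcement):

% Chain rule applied to f(x) = h(Wx):
\[
\nabla_x f(x) \;=\; W^{\top} \nabla_z h(Wx),
\qquad
\nabla_W f(x) \;=\; \nabla_z h(Wx)\, x^{\top}.
\]

The same factor \(\nabla_z h(Wx)\) appears in both the gradient with respect to the input and the gradient with respect to the first-layer weights, so flatness conditions on the parameter landscape translate into bounds on \(\nabla_x f\), i.e., on the input-smoothness of the model.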
Speaker's Bio
Dr. Chao Ma is currently a Szegö Assistant Professor in the Department of Mathematics at
Stanford University. His research interests lie in the theory and application of machine
learning. Dr. Ma is especially interested in theoretically understanding the behavior of
deep neural networks, including their optimization and generalization. Dr. Ma obtained
his PhD from the Program in Applied and Computational Mathematics at Princeton
University, under the supervision of Professor Weinan E. He received his bachelor's
degree from the School of Mathematical Sciences at Peking University.