White-Box Deep (Convolution) Networks from First Principles
Speaker: Professor Yi Ma, UC Berkeley
Time: Tuesday, Aug 17, 2021, 11:00 AM - 12:00 Noon, Eastern Time
Zoom Link: contact tml.online.seminars@gmail.com
Abstract:
In this talk, we offer an entirely “white box” interpretation of deep
(convolution) networks from the perspective of data compression (and
group invariance). In particular, we show how modern deep layered
architectures, their linear (convolution) operators and nonlinear
activations, and even all of their parameters can be derived from the
principle of maximizing rate reduction (with group invariance). All
layers, operators, and parameters of the network are explicitly
constructed via forward propagation, instead of learned via back
propagation. All components of the so-obtained network, called the
ReduNet, have precise optimization-theoretic, geometric, and statistical
interpretations. This principled approach also yields several pleasant
surprises: it reveals a fundamental tradeoff between invariance and
sparsity for class separability; it reveals a fundamental connection
between deep networks and the Fourier transform for group invariance,
namely the computational advantage of working in the spectral domain
(why spiking neurons?); and it clarifies the mathematical roles of
forward propagation (optimization) and backward propagation (variation).
In particular, the so-obtained ReduNet is amenable to fine-tuning via
both forward and backward (stochastic) propagation, with both optimizing
the same objective.
This is joint work with students Yaodong Yu, Ryan Chan, and Haozhi Qi of
Berkeley, Dr. Chong You, now at Google Research, and Professor John Wright
of Columbia University. A related paper can be found at:
https://arxiv.org/abs/2105.10446
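For readers who want a concrete handle on the objective mentioned in the
abstract, below is a minimal NumPy sketch of a coding-rate-reduction
computation in the spirit of the rate measures used in the paper linked
above. The feature layout (columns of a d x n matrix), the distortion
parameter eps, and the function names are illustrative assumptions, not
the exact formulation used in the talk.

    import numpy as np

    def coding_rate(Z, eps=0.5):
        # R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T): the rate needed to
        # encode the n columns of Z (features in R^d) up to distortion eps.
        d, n = Z.shape
        _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
        return 0.5 * logdet

    def rate_reduction(Z, labels, eps=0.5):
        # Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the rate of the whole
        # feature set minus the weighted rates of its class-conditional parts.
        # Maximizing this expands features overall while compressing each class.
        n = Z.shape[1]
        compressed = 0.0
        for c in np.unique(labels):
            Z_c = Z[:, labels == c]
            compressed += (Z_c.shape[1] / n) * coding_rate(Z_c, eps)
        return coding_rate(Z, eps) - compressed

    # Toy check: two well-separated Gaussian classes yield a larger rate
    # reduction with the correct labels than with randomly shuffled labels.
    rng = np.random.default_rng(0)
    Z = np.hstack([rng.normal(0.0, 1.0, (10, 50)), rng.normal(5.0, 1.0, (10, 50))])
    y = np.array([0] * 50 + [1] * 50)
    print(rate_reduction(Z, y), rate_reduction(Z, rng.permutation(y)))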
Speaker's Bio:
Yi Ma is a Professor in the Department of Electrical Engineering and
Computer Sciences at the University of California, Berkeley. His research
interests include computer vision, high-dimensional data analysis, and
intelligent systems. Yi received his Bachelor's degrees in Automation and
Applied Mathematics from Tsinghua University in 1995, two Master's degrees
in EECS and Mathematics in 1997, and a PhD in EECS from UC Berkeley in
2000. He was on the faculty of the ECE Department at UIUC from 2000 to
2011, a principal researcher and manager of the Visual Computing group at
Microsoft Research Asia from 2009 to 2014, and the Executive Dean of the
School of Information Science and Technology at ShanghaiTech University
from 2014 to 2017. He then joined the faculty of UC Berkeley EECS in 2018.
He has published about 60 journal papers, 120 conference papers, and three
textbooks in computer vision, generalized principal component analysis,
and high-dimensional data analysis. He received the NSF CAREER Award in
2004 and the ONR Young Investigator Award in 2005. He also received the
David Marr Prize in computer vision at ICCV 1999 and Best Paper Awards at
ECCV 2004 and ACCV 2009. He served as Program Chair for ICCV 2013 and
General Chair for ICCV 2015. He is a Fellow of the IEEE, ACM, and SIAM.