Towards understanding theoretical limitations of meta learners
Speaker: James Lucas, University of Toronto
Time: Tuesday, June 15, 2021, 10:00 AM - 11:00 AM ET
Zoom Link: contact tml.online.seminars@gmail.com
Abstract:
Machine learning models have traditionally been developed under the
assumption that the training and test distributions match exactly.
However, recent successes in few-shot learning and related problems are
encouraging signs that these models can be adapted to more realistic
settings where the train and test distributions differ. In this talk, I will
present novel information-theoretic lower bounds on the error of learners
that are trained on data from multiple sources and tested on data from
unseen distributions. These bounds depend intuitively on the information
shared between sources of data, and characterize the difficulty of
learning in this setting for arbitrary algorithms (in a minimax sense). I
will conclude with an application of these bounds to a Hierarchical
Bayesian model of meta learning, providing additional risk upper bounds in
this setting.
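
For readers unfamiliar with the minimax framing mentioned above, one common way to formalize it is sketched below. This uses assumed, illustrative notation and is not necessarily the exact setup of the talk:

\[
\mathcal{R}^{\ast} \;=\; \inf_{\hat{f}} \; \sup_{\tau \in \mathcal{T}} \; \mathbb{E}_{S_1,\dots,S_n \sim \tau}\!\left[ R_{\mathrm{target}}\!\left(\hat{f}(S_1,\dots,S_n)\right) \right],
\]

where \(S_1,\dots,S_n\) are samples from the observed source distributions, \(\tau\) ranges over a family \(\mathcal{T}\) of possible environments, \(R_{\mathrm{target}}\) is the risk under the unseen target distribution, and the infimum is over all learning algorithms \(\hat{f}\). A lower bound on \(\mathcal{R}^{\ast}\) then limits what any algorithm can achieve in the worst case. The symbols \(\tau\), \(\mathcal{T}\), and \(R_{\mathrm{target}}\) are placeholders rather than the paper's own notation.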
Speaker's Bio:
James Lucas is a PhD candidate at the University of Toronto, where he is
supervised by Richard Zemel and Roger Grosse. James’ research has
primarily focused on the practice and theory behind training deep neural
networks. In particular, he has developed techniques for training provably
smooth neural networks and investigated the loss landscape geometry of
deep nets. More recently, he has studied theoretical limitations of
learning in realistic settings such as few-shot learning. In addition to
his research, James is passionate about video game development and is
currently working towards releasing his first game.