Realizable Learning is All You Need
Speaker: Max Hopkins, UCSD
Time: Monday, April 24, 2023, 11:00AM - 12:00 Noon, Eastern Time
Zoom Link: contact tml.online.seminars@gmail.com
Abstract:
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. This classical result, dating back to Vapnik and Chervonenkis [VC74], roughly states:
Given a set X and a family H of binary classifiers over X, the ability to learn a classifier h \in H from labeled examples of the form (x, h(x)) is equivalent to performing a (seemingly) harder task: given samples from an arbitrary distribution D over X \times {0,1}, find the classifier in H that best approximates D.
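For concreteness, in standard PAC terminology (filling in the usual accuracy and confidence parameters, which the statement above leaves implicit), the two tasks read roughly as follows.
Realizable learning: for every target h* \in H and every marginal distribution D_X over X, given i.i.d. examples (x, h*(x)) with x ~ D_X, output a hypothesis h such that
    Pr_{x ~ D_X}[ h(x) \ne h*(x) ] \le \epsilon
with probability at least 1 - \delta.
Agnostic learning: for every joint distribution D over X \times {0,1}, given i.i.d. samples from D, output a hypothesis h such that
    err_D(h) \le \min_{h' \in H} err_D(h') + \epsilon,   where err_D(h) := Pr_{(x,y) ~ D}[ h(x) \ne y ],
again with probability at least 1 - \delta.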
Surprisingly, despite its foundational importance (and known variants across essentially every well-studied model of supervised learning), we still lack a general explanation for this behavior: typical proofs are complicated, rely on auxiliary model-specific properties, and usually break down under even slight changes of setting.
In this talk, we give the first unified explanation of this 50-year-old phenomenon: an elementary blackbox reduction from agnostic to realizable learning that holds across a wide range of classical settings. We discuss how our reduction stems from a new connection between learning and non-uniform covers, and (time permitting) show how this viewpoint leads to new results in more challenging settings such as learning under distributional assumptions, general loss functions, and privacy constraints.
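To make "blackbox reduction" concrete, the following is a minimal Python sketch of one reduction in this spirit: it assumes a subroutine realizable_learner(sample) that returns a hypothesis (a callable mapping X to {0,1}). The function name agnostic_via_realizable and the subsample-then-validate strategy are illustrative assumptions, not necessarily the exact construction presented in the talk.

from itertools import combinations

def agnostic_via_realizable(realizable_learner, train, validation):
    """Sketch: run the realizable learner on every nonempty subsample of the
    (possibly non-realizable) training set, then return the candidate
    hypothesis with the lowest error on a held-out validation set."""
    candidates = []
    for k in range(1, len(train) + 1):
        for subsample in combinations(train, k):
            # Each subsample is fed to the realizable learner as if it were
            # realizably labeled; only the black-box output is kept.
            candidates.append(realizable_learner(list(subsample)))

    def val_error(h):
        # Empirical 0-1 error of hypothesis h on the validation set.
        return sum(h(x) != y for x, y in validation) / len(validation)

    return min(candidates, key=val_error)

Note that this sketch enumerates exponentially many subsamples, so it illustrates a statement about how the realizable learner is used as a black box, not a computationally efficient procedure.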
Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.
Speaker's Bio:
Max Hopkins is a PhD candidate at UC San Diego, supported by NSF GRFP and ARCS fellowships. Max is broadly interested in the role of structure and randomness in computation, especially high-dimensional expansion and combinatorial structure in learning. His work bridges the development and application of these tools across areas such as statistical learning theory, algorithmic stability, Boolean function analysis, and hardness of approximation.