Title: Is Machine Learning Tractable? --- Three Vignettes
Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-hard or worse. Nevertheless, researchers have developed heuristic algorithms to solve these tasks in practice. In most cases, there are no provable guarantees on the performance of these algorithms/heuristics --- neither on their running time, nor on the quality of solutions they return. Can we change this state of affairs?
This talk will suggest that the answer is yes, and cover three recent works as illustration:

(a) A new algorithm for learning topic models. This algorithm (which handles the Latent Dirichlet Allocation model of Blei et al. but also more general models) provably works in theory under some reasonable assumptions, and in practice is up to 50 times faster than existing software. It relies upon a new procedure for nonnegative matrix factorization.

(b) What classifiers are worth learning?

(c) Provable ICA with unknown Gaussian noise.
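For readers unfamiliar with nonnegative matrix factorization, the following is a minimal sketch of the *standard* heuristic approach (the classic multiplicative-update rule of Lee and Seung), shown only to illustrate the problem being solved; it is not the provable algorithm described in the talk, and the function name and parameters are illustrative choices.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Approximately factor a nonnegative matrix V (m x n) as W @ H,
    with W (m x k) and H (k x n) both entrywise nonnegative.
    Uses the classic multiplicative-update heuristic (Lee & Seung),
    which has no worst-case guarantees -- precisely the gap the
    talk's provable procedure aims to close."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-6
    H = rng.random((k, n)) + 1e-6
    eps = 1e-9  # avoids division by zero in the update rules
    for _ in range(iters):
        # Alternating multiplicative updates; each step keeps W, H nonnegative
        # and does not increase the Frobenius error ||V - W @ H||.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy example: a small rank-2 nonnegative matrix.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, k=2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

In topic modeling, V would be a word-document matrix, W the word distributions of the topics, and H the topic weights of each document; heuristics like the one above can get stuck at poor local optima, which is why a provably correct factorization procedure matters.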
(Based upon joint works with Rong Ge, Ravi Kannan, Ankur Moitra, Sushant Sachdeva.)