Presentation Schedule - Spring 2019
| Date | Paper | Presenter | Readers |
| --- | --- | --- | --- |
| 02/12/2019 | Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization | Daniel Vasquez | Chi-Hua Wang and Bin Du |
| 02/14/2019 | Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes | David Newton | Jiapeng Liu and Mustafa Lokhandwala |
| 02/19/2019 | A Distributed Stochastic Gradient Tracking Method | Bingjing Tang | Larissa Mori and Monika Tomar |
| 02/21/2019 | Adam: A Method for Stochastic Optimization | Kent Gauen | Nimish Awalgaonkar and Ruixin Wang |
| 02/26/2019 | Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization | Prateek Jaiswal | Yang Xie and Zhanyu Wang |
| 02/28/2019 | Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers | Shujin Jiang | Chi-Hua Wang and Viplove Arora |
| 03/05/2019 | An Optimal First Order Method Based on Optimal Quadratic Averaging | Tian Ye | Gourav Lalitkumar Jhanwar and Tejaskumar Pradipbhai Tamboli |
| 03/07/2019 | Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence (Slide) | Chi-Hua Wang | David Newton and Daniel Vasquez |
| 03/19/2019 | The Landscape of Empirical Risk for Non-convex Losses | Ruixin Wang | Prateek Jaiswal and Kent Gauen |
| 03/21/2019 | Optimality guarantees for distributed statistical estimation | Larissa Mori | Bingjing Tang and Shujin Jiang |
| | Cancelled | | |
| 03/26/2019 | Confidence Intervals and Hypothesis Testing for High-Dimensional Regression | Zhanyu Wang | Tian Ye and Prateek Jaiswal |
| 04/02/2019 | Balancing Communication and Computation in Distributed Optimization | Bin Du | Daniel Vasquez |
| 04/04/2019 | Talk by John Birge, Univ. of Chicago (9:00-10:30, RAWL 3082): Dynamic Learning in Strategic Pricing Games | | |
| 04/09/2019 | Variational Calculus in the Space of Measures and Optimal Design | Prateek Jaiswal | Gourav Lalitkumar Jhanwar and David Newton |
| 04/11/2019 | Simulation for American Options: Regression Now or Regression Later? | Yang Xie | Tejaskumar Tamboli and Ruixin Wang |
| 04/16/2019 | EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization | Mustafa Lokhandwala | Bingjing Tang and Shujin Jiang |
| 04/18/2019 | Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming (CHF 4/9/2019: Train faster, generalize better: Stability of stochastic gradient descent; On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima) | Chih-hao Fang | Kent Gauen and Zhanyu Wang |
| 04/23/2019 | On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging | Nimish Awalgaonkar | Gourav Lalitkumar Jhanwar |
| 04/25/2019 | Achieving Geometric Convergence for Distributed Optimization over Time-Varying Graphs | Viplove Arora | Bingjing Tang and Larissa Mori |