Aaron Sidford

Aaron Sidford is an Associate Professor in the departments of Management Science and Engineering and Computer Science at Stanford University. He completed his PhD at MIT; his dissertation is titled "Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier."

Contact: [last name]@stanford.edu, where [last name] = sidford. Office: 475 Via Ortega, Stanford, CA 94305. I often do not respond to emails about applications; if you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together. I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group. We organize regular talks, and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).

News: Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022) for "Efficient Convex Optimization Requires Superlinear Memory"; the paper was chosen from more than 150 accepted papers at the conference. Earlier, Aaron Sidford joined Stanford's Management Science & Engineering department, launching the new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms." Also, unlike previous ADFOCS, this year the event will take place over the span of three weeks.

Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms, with a focus on the design of efficient algorithms based on graph theory, convex optimization, and high dimensional geometry (CV). The design of algorithms is traditionally a discrete endeavor. I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications, particularly developing algorithms for machine learning with provable guarantees, and I enjoy understanding the theoretical grounding of many algorithms that are of practical importance. I develop new iterative methods and dynamic algorithms (i.e., data structures) that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system, and these tools complement each other, resulting in improved optimization algorithms. Many of my results use fast matrix multiplication.

Teaching: Optimization and Algorithmic Paradigms (CS 261): Winter '23. Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22. Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18. Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19; Spring '19, '18, '17. Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17.

A representative result: in the "Lower Bounds for Finding Stationary Points" papers (listed below), we establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods. In particular, we prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods.
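To make the quoted gap concrete, the two rates can be compared directly. The display below is just the arithmetic implied by the sentence above, with $T(\epsilon)$ denoting the number of first-order queries (a notation assumed here for exposition, not taken from the paper):

\[
  T(\epsilon) \;=\; \Omega\!\left(\epsilon^{-8/5}\right)
  \quad\text{(lower bound for deterministic methods)},
\]
\[
  \epsilon^{-8/5}\cdot \epsilon^{-1/15}\log\tfrac{1}{\epsilon}
  \;=\; \epsilon^{-24/15-1/15}\log\tfrac{1}{\epsilon}
  \;=\; \epsilon^{-5/3}\log\tfrac{1}{\epsilon}
  \quad\text{(best known rate for such methods)}.
\]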
Publications and Preprints

The authors of most papers are ordered alphabetically.

ReSQueing Parallel and Private Stochastic Convex Optimization. Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2023), pp. 4667-4767. Improves stochastic convex optimization in the parallel and DP settings.

Private Convex Optimization in General Norms. Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, and Kevin Tian.

Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, and Aaron Sidford.

With Sepehr Assadi, Arun Jambulapati, and Kevin Tian. CoRR abs/2101.05719 (2021).

With Yosheb Getachew, Yujia Jin, and Kevin Tian (2023).

Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Neural Information Processing Systems (NeurIPS 2022). "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter!"

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. In International Conference on Machine Learning (ICML 2022).

Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods. In Conference on Learning Theory (COLT 2022).

Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. In International Colloquium on Automata, Languages, and Programming (ICALP 2022).

Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales. Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, and Honglin Yuan. In Conference on Learning Theory (COLT 2022). arXiv | conference pdf (alphabetical authorship).

Efficient Convex Optimization Requires Superlinear Memory. Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant. In Conference on Learning Theory (COLT 2022), Best Paper Award.

Multicalibrated Partitions for Importance Weights. Parikshit Gopalan, Omer Reingold, Vatsal Sharan, and Udi Wieder. In Algorithmic Learning Theory (ALT 2022). arXiv.

Sequential Matrix Completion.

Simple MAP inference via low-rank relaxations. Roy Frostig, Sida Wang, Percy Liang, and Chris Manning.

"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions. With Yair Carmon, John C. Duchi, and Oliver Hinder. In International Conference on Machine Learning (ICML 2017) (arXiv).

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu. In Symposium on Theory of Computing (STOC 2017).

Subquadratic Submodular Function Minimization. With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong. In Symposium on Theory of Computing (STOC 2017) (arXiv).

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu. In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv).

Geometric Median in Nearly Linear Time. With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki. In Symposium on Theory of Computing (STOC 2016) (arXiv).

Routing under Balance. With Alina Ene, Gary L. Miller, and Jakub Pachocki. In Symposium on Theory of Computing (STOC 2016).

Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli. In Conference on Learning Theory (COLT 2016) (arXiv).

Principal Component Projection Without Principal Component Analysis. With Roy Frostig, Cameron Musco, and Christopher Musco. In International Conference on Machine Learning (ICML 2016) (arXiv).

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli. In International Conference on Machine Learning (ICML 2016).

Faster Eigenvector Computation via Shift-and-Invert Preconditioning. With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli. In International Conference on Machine Learning (ICML 2016) (arXiv).
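The last entry above is about shift-and-invert preconditioning. As a plain illustration of the underlying idea, and expressly not the paper's accelerated method (which solves the shifted systems approximately with fast stochastic solvers), here is classical inverse iteration in Python; the test matrix is random and the shift is assumed known, both made-up choices for the sketch:

import numpy as np

# Shift-and-invert idea: power iteration on A converges slowly when the top
# eigengap is small, but power iteration on (lam*I - A)^{-1} blows up the gap
# near lam. This toy uses a dense solve; a shift slightly above lambda_max is
# assumed to be known, which real algorithms must estimate.

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                      # symmetric test matrix
A = A / np.linalg.norm(A, 2)           # normalize spectrum into [-1, 1]

lam = 1.05                             # assumed shift just above lambda_max(A)
B = lam * np.eye(n) - A                # positive definite since lam > lambda_max

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(25):
    v = np.linalg.solve(B, v)          # one step of power iteration on B^{-1}
    v /= np.linalg.norm(v)

rayleigh = v @ A @ v                   # Rayleigh quotient estimates lambda_max(A)
print("shift-and-invert estimate:", rayleigh)
print("true lambda_max:          ", np.max(np.linalg.eigvalsh(A)))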
Constant Girth Approximation for Directed Graphs in Subquadratic Time. With Shiri Chechik, Yang P. Liu, and Omer Rotem. In Symposium on Theory of Computing (STOC 2020) (arXiv).

Faster Divergence Maximization for Faster Maximum Flow. Yu Gao, Yang P. Liu, and Richard Peng. In Symposium on Foundations of Computer Science (FOCS 2020).

Coordinate Methods for Matrix Games. In Symposium on Foundations of Computer Science (FOCS 2020).

Acceleration with a Ball Optimization Oracle. In Neural Information Processing Systems (NeurIPS 2020, Oral).

Leverage Score Sampling for Faster Accelerated Regression and ERM. With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli. In International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv).

Near-optimal Approximate Discrete and Continuous Submodular Function Minimization. In Symposium on Discrete Algorithms (SODA 2020) (arXiv).

Fast and Space Efficient Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos.

Complexity of Highly Parallel Non-Smooth Convex Optimization. With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li. In Conference on Neural Information Processing Systems (NeurIPS 2019).

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.

A Direct $\tilde{O}(1/\epsilon)$ Iteration Parallel Algorithm for Optimal Transport. In Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv). (See the sketch after this list for classical context.)

A General Framework for Efficient Symmetric Property Estimation. With Moses Charikar and Kirankumar Shiragur.

Parallel Reachability in Almost Linear Work and Square Root Depth. In Symposium on Foundations of Computer Science (FOCS 2019) (arXiv).

Faster Matroid Intersection. With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong. In Symposium on Foundations of Computer Science (FOCS 2019).

Deterministic Approximation of Random Walks in Small Space. With Jack Murtagh, Omer Reingold, and Salil P. Vadhan. In International Workshop on Randomization and Computation (RANDOM 2019).

A Rank-1 Sketch for Matrix Multiplicative Weights. With Yair Carmon, John C. Duchi, and Kevin Tian. In Conference on Learning Theory (COLT 2019) (arXiv).
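For the optimal transport entry above, here is classical context rather than the paper's parallel algorithm: entropy-regularized optimal transport is commonly solved by Sinkhorn iteration, in which each iteration is a pair of cheap, embarrassingly parallel matrix-vector scalings. In this sketch the marginals, cost matrix, and regularization level are all made up for illustration:

import numpy as np

# Sinkhorn iteration for entropy-regularized OT: alternately rescale rows and
# columns of the Gibbs kernel K = exp(-C/eps) until both marginals match.
# This is the classical baseline, not the accelerated method in the paper.

rng = np.random.default_rng(2)
n = 100
r = np.full(n, 1.0 / n)               # source marginal (uniform, assumed)
c = np.full(n, 1.0 / n)               # target marginal (uniform, assumed)
C = rng.random((n, n))                # made-up cost matrix
K = np.exp(-C / 0.05)                 # Gibbs kernel, regularization eps = 0.05

u = np.ones(n)
v = np.ones(n)
for _ in range(500):
    v = c / (K.T @ u)                 # match column marginals
    u = r / (K @ v)                   # match row marginals

P = u[:, None] * K * v[None, :]       # resulting transport plan
print("transport cost:", np.sum(P * C))
print("row-marginal error:", np.abs(P.sum(axis=1) - r).max())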
Near-optimal method for highly smooth convex optimization.

Efficient profile maximum likelihood for universal symmetric property estimation. In Symposium on Theory of Computing (STOC 2019) (arXiv).

Memory-sample tradeoffs for linear regression with small error.

Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications. With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi. In Symposium on Discrete Algorithms (SODA 2019) (arXiv).

Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression. In Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv).

Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model. With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.

Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2018).

Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations. With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao. In Symposium on Foundations of Computer Science (FOCS 2018) (arXiv).

Efficient Convex Optimization with Membership Oracles. In Conference on Learning Theory (COLT 2018) (arXiv).

Accelerating Stochastic Gradient Descent for Least Squares Regression. With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.

Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners. With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams. In Symposium on Discrete Algorithms (SODA 2018) (arXiv).

Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes. In Symposium on Discrete Algorithms (SODA 2018).

Efficient $\tilde{O}(n/\epsilon)$ Spectral Sketches for the Laplacian and its Pseudoinverse. In Symposium on Discrete Algorithms (SODA 2018).

Stability of the Lanczos Method for Matrix Function Approximation. In Symposium on Discrete Algorithms (SODA 2018).

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness. With Cameron Musco, Praneeth Netrapalli, Shashanka Ubaru, and David P. Woodruff. In Innovations in Theoretical Computer Science (ITCS 2018).

Some one-line summaries of papers in this collection:
"We characterize when solving the max $\min_{x}\max_{i\in[n]}f_i(x)$ is (not) harder than solving the average $\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)$."
"Improved upper and lower bounds on first-order queries for solving $\min_{x}\max_{i\in[n]}\ell_i(x)$."
"A low-bias low-cost estimator of subproblem solution suffices for acceleration!"
"A special case where variance reduction can be applied to nonconvex optimization (monotone operators)."
"Team-convex-optimization for solving discounted and average-reward MDPs!"
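The first two summaries above contrast the max-loss $\min_x \max_{i} f_i(x)$ with the average loss. As a baseline for what a generic first-order method does on the max-loss (illustrative only, and not the faster methods from these papers), here is the classical subgradient method on a max of convex quadratics, with synthetic data:

import numpy as np

# Subgradient method for min_x max_i f_i(x) with f_i(x) = 0.5*(a_i^T x - b_i)^2.
# A valid subgradient of the max at x is the gradient of any maximizing f_i.
# Data, step sizes, and iteration count are made up for this toy.

rng = np.random.default_rng(1)
n, d = 50, 10
A = rng.standard_normal((n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit rows keep steps stable
b = rng.standard_normal(n)

def losses(x):
    r = A @ x - b
    return 0.5 * r ** 2                          # each f_i is convex in x

x = np.zeros(d)
for t in range(1, 2001):
    r = A @ x - b
    i = int(np.argmax(0.5 * r ** 2))             # index achieving the max-loss
    g = r[i] * A[i]                              # subgradient of the max at x
    x -= (1.0 / np.sqrt(t)) * g                  # standard 1/sqrt(t) step size

print("max-loss:", losses(x).max(), " avg-loss:", losses(x).mean())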
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization. In Symposium on Foundations of Computer Science (FOCS 2015), Machtey Award for Best Student Paper (arXiv).

Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In Symposium on Foundations of Computer Science (FOCS 2015) (arXiv).

Competing with the Empirical Risk Minimizer in a Single Pass. With Roy Frostig, Rong Ge, and Sham Kakade. In Conference on Learning Theory (COLT 2015) (arXiv).

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization. In International Conference on Machine Learning (ICML 2015) (arXiv).

Uniform Sampling for Matrix Approximation. With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng. In Innovations in Theoretical Computer Science (ITCS 2015) (arXiv).

Path-Finding Methods for Linear Programming: Solving Linear Programs in $\tilde{O}(\sqrt{\mathrm{rank}})$ Iterations and Faster Algorithms for Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2014), Best Paper Award and Machtey Award for Best Student Paper (arXiv).

Single Pass Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.

An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations. With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia. In Symposium on Discrete Algorithms (SODA 2014).

Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. In Symposium on Foundations of Computer Science (FOCS 2013) (arXiv).

A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu. In Symposium on the Theory of Computing (STOC 2013) (arXiv); SIAM Journal on Computing (arXiv before merge).

Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. With Jack Murtagh, Omer Reingold, and Salil Vadhan. Book chapter in Building Bridges II: Mathematics of Laszlo Lovasz, 2020 (arXiv).

Lower Bounds for Finding Stationary Points I.

Lower Bounds for Finding Stationary Points II: First-Order Methods.

Accelerated Methods for Non-Convex Optimization. SIAM Journal on Optimization, 2018 (arXiv).

Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification. Journal of Machine Learning Research, 2017 (arXiv).

A recent talk abstract: "In this talk, I will present a new algorithm for solving linear programs. Given a linear program with $n$ variables and $m > n$ constraints of bit complexity $L$, our algorithm runs in $\tilde{O}(\sqrt{n}\,L)$ iterations, each consisting of solving $\tilde{O}(1)$ linear systems and additional nearly linear time computation. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods."
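To unpack what "iterations, each solving a few linear systems" means for interior-point methods in general, here is a minimal textbook log-barrier sketch; it is far weaker than the $\tilde{O}(\sqrt{n}\,L)$-iteration method described in the abstract above, and the instance and all parameter choices below are made up for illustration:

import numpy as np

# Log-barrier method for min c^T x subject to Ax <= b, from a strictly
# feasible start: minimize t*c^T x - sum_i log(b - Ax)_i by Newton's method,
# then increase t. Each Newton step solves exactly one linear system.

def barrier_lp(c, A, b, x, t=1.0, mu=3.0, outer=30, inner=15):
    for _ in range(outer):
        for _ in range(inner):                       # Newton steps for fixed t
            s = b - A @ x                            # slacks, kept strictly positive
            g = t * c + A.T @ (1.0 / s)              # gradient of the barrier objective
            H = A.T @ ((1.0 / s**2)[:, None] * A)    # Hessian of the barrier term
            dx = np.linalg.solve(H, -g)              # the one linear system per step
            step = 1.0
            while np.min(b - A @ (x + step * dx)) <= 0:
                step *= 0.5                          # backtrack to stay strictly feasible
            x = x + step * dx
        t *= mu                                      # tighten the barrier parameter
    return x

# Toy instance: minimize x0 + x1 over the box [-1, 1]^2; optimum is (-1, -1).
c = np.array([1.0, 1.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
print(barrier_lp(c, A, b, x=np.zeros(2)))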
Notes

Here are some lecture notes that I have written over the years. Some I am still actively improving, and all of them I am happy to continue polishing. If you see any typos or issues, feel free to email me.

Eigenvalues of the Laplacian and their relationship to the connectedness of a graph.

Other writing: Fourier Transformation at a Representation. Annie Marsden, 2013 (pdf). Done under the mentorship of M. Malliaris.
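As a companion to the Laplacian note above: the number of (near-)zero eigenvalues of a graph Laplacian equals the number of connected components, so the second-smallest eigenvalue (the algebraic connectivity) is positive exactly when the graph is connected. A small self-contained check, on two made-up four-vertex graphs:

import numpy as np

# Build the combinatorial Laplacian L = D - A from an edge list and inspect
# its spectrum: one zero eigenvalue per connected component.

def laplacian(edges, n):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

for name, edges in [("path on 4 vertices", [(0, 1), (1, 2), (2, 3)]),
                    ("two disjoint edges", [(0, 1), (2, 3)])]:
    lam = np.linalg.eigvalsh(laplacian(edges, 4))
    components = int(np.sum(lam < 1e-9))             # count near-zero eigenvalues
    print(name, "| eigenvalues:", np.round(lam, 3), "| components:", components)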