About me
I am an Assistant Professor in the Industrial Engineering Department at the University of Pittsburgh. Before joining the University of Pittsburgh, I was a visiting postdoctoral researcher in the Optimization and Algorithms group at Google in New York. In 2019, I received my Ph.D. from the Department of Management Science and Engineering at Stanford University, where Professor Yinyu Ye was my advisor.
Research
My research focuses on continuous optimization, with a penchant for local optimization methods such as gradient descent. I aim to develop reliable and efficient algorithms built on solid mathematical foundations, with the goal of creating new optimization tools for operations research and machine learning. I typically aim to improve existing optimization software in terms of speed, scalability, accuracy, usability, and reliability. My work spans the spectrum from fundamental optimization theory to software development: I build both general-purpose optimization solvers and solvers targeted at specific applications such as drinking water networks, electric grids, and certifiable deep learning.
Learn more about my research on the Optimizing You podcast.
Please see my Google Scholar page for an up-to-date list of publications.
My Ph.D. thesis was Principled Algorithms for Finding Local Minimizers.
You can contact me at ohinder at pitt dot edu.
Courses
Applied data analytics - IE 2064
Advanced nonlinear optimization - IE 3080
Introduction to optimization for machine learning - IE 1187
Industry impact
My research on nonconvex interior point methods inspired Hexaly's own implementation.
My work on first-order methods for linear programming is part of the Google Operations Research tools (OR-Tools) package. Several other companies, including NVIDIA and COPT, have developed GPU variants of this method.
My work with DeepMind has led to state-of-the-art techniques for neural network certification.
Notable awards
Conference on Learning Theory (2024) Best Paper Prize (joint with Yair Carmon) for the paper The Price of Adaptivity in Stochastic Convex Optimization.
Award committee citation: A fundamental problem in convex optimization is to establish minimax rates for each function class and find algorithms that achieve these rates. However, the minimax approach does not fully capture the reality of a numerical optimization process because it does not take into account that some characteristics of the function class are unknown to the algorithm. This paper fills this gap by proving tight lower bounds on the multiplicative increase in the rates due to algorithm uncertainty about the function class parameters. In doing so, this paper establishes the optimality of old and recent "parameter-free" and "adaptive" stochastic optimization algorithms, a problem that has remained open for the past 10 years. Moreover, the delicate information-theoretic methods introduced in their proofs will likely inspire other applications as well.
Beale–Orchard-Hays Prize (2024) (joint with David Applegate, Mateo Díaz, Haihao Lu, Miles Lubin, Brendan O'Donoghue, and Warren Schudy) for the papers Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient and Faster first-order primal-dual methods for linear programming using restarts and sharpness.
Award committee citation: The committee commends the following aspects of this work: its long-term potential to make first-order methods a practical option for solving large-scale linear programming problems; its adaptability to GPUs and other parallel computing architectures; the careful algorithmic engineering work to make the methods practical; and the sophisticated and innovative analysis used to justify and describe the performance of the algorithms.
Current PhD students
Fadi Hamad
Jared Lawerence
Akif Khan
Funding acknowledgements
My work at Pitt is supported by the NSF-BSF program, the Air Force Office of Scientific Research (AFOSR), the Mascaro Center for Sustainable Innovation (MCSI), and a Google Research Scholar award.