I am a research scientist at Apple MLR, broadly working in the area of Machine Learning theory. I recently finished a short-term research visit at the Toyota Technological Institute at Chicago (TTIC), and before that completed my postdoc stint at Microsoft Research New York City. I obtained my Ph.D. from the Department of Computer Science, Indian Institute of Science, Bangalore, advised by Aditya Gopalan and Chiranjib Bhattacharyya. I was fortunate to intern at Microsoft Research, Bangalore; Inria, Paris; and Google AI, Mountain View.
Research Interests: Machine Learning (esp. Bandits, Online Learning, RL), Optimization, Federated Learning, Mechanism Design.

My current research focuses on developing large-scale, robust algorithms for sequential decision-making under restricted and unconventional feedback, e.g., preference information, click data, proxy rewards, and partial rankings. My other recent ventures include handling complex prediction environments, such as combinatorial decision spaces, dynamic regret, multiplayer games, and distributed optimization. Recently I have also become interested in interdisciplinary directions that combine prediction modeling with algorithmic fairness, differential privacy, and strategic mechanisms. Please feel free to reach out if you are interested in brainstorming about bridging the gap between theory and practice in any of these directions!
[Selected Papers] [Full List] [Google Scholar] [DBLP] [arXiv]

Selected Papers:
- ANACONDA: Improved Dynamic Regret Algorithm for Adaptive Non-Stationary Dueling Bandits [arXiv Version]
Thomas Kleine Buening, Aadirupa Saha
In International Conference on Artificial Intelligence and Statistics, AIStats 2023
- Versatile Dueling Bandits [arXiv Version]
Aadirupa Saha, Pierre Gaillard
In International Conference on Machine Learning, ICML 2022
- Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability [arXiv Version]
Aadirupa Saha, Akshay Krishnamurthy
In Algorithmic Learning Theory, ALT 2022
- Optimal Algorithms for Stochastic Contextual Dueling Bandits
Aadirupa Saha
In Neural Information Processing Systems, NeurIPS 2021
- Dueling Convex Optimization
Aadirupa Saha, Tomer Koren, Yishay Mansour
In International Conference on Machine Learning, ICML 2021
- Adversarial Dueling Bandits [arXiv Version]
Aadirupa Saha, Tomer Koren, Yishay Mansour
In International Conference on Machine Learning, ICML 2021
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [arXiv Version]
Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain
In International Conference on Machine Learning, ICML 2021
- From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model [arXiv Version]
Aadirupa Saha, Aditya Gopalan
In International Conference on Machine Learning, ICML 2020
- Best-item Learning in Random Utility Models with Subset Choices [arXiv Version]
Aadirupa Saha, Aditya Gopalan
In International Conference on Artificial Intelligence and Statistics, AIStats 2020
- Combinatorial Bandits with Relative Feedback [arXiv Version]
Aadirupa Saha, Aditya Gopalan
In Neural Information Processing Systems, NeurIPS 2019
- PAC Battling Bandits in the Plackett-Luce Model [arXiv Version]
Aadirupa Saha, Aditya Gopalan
In Algorithmic Learning Theory, ALT 2019