Short Bio [In third person]
Aadirupa Saha has been an Assistant Professor in the Department of Computer Science at the University of Illinois Chicago (UIC) since Fall 2025. She is a member of the UIC CS Theory group, as well as IDEAL Institute. Prior to this, she was a Research Scientist at Apple MLR, working on Machine Learning theory. She completed her postdoctoral research at Microsoft Research (NYC) and earned her PhD from the Indian Institute of Science (IISc), Bangalore.
Saha's primary research focuses on AI alignment through Reinforcement Learning with Human Feedback (RLHF), with applications in language models, assistive robotics, autonomous systems, and personalized AI. At a high level, her work aims to develop robust and scalable methods for designing prediction systems under uncertain and partial feedback.
[Optional] Specifically, Saha is deeply motivated by the tremendous potential of AI to democratize learning: reshaping our current education system into a truly adaptive, accessible, and personalized experience for every learner. Driven by this transformative power of generative AI and language models, she envisions building the foundations for equitable, intelligent education systems that turn this vision into reality. Her research focuses on developing next-generation educational models by leveraging her expertise in AI alignment with human feedback, alongside tools from Machine Learning (Online Learning, Bandits, and RL theory), Optimization, Federated Learning, Differential Privacy, and Mechanism Design.
Saha has organized several workshops and tutorials in recent years, notably a keynote talk at the DA2PL Conference, a tutorial on Preference Learning [NeurIPS, 2023], a tutorial on Federated Optimization [UAI, 2023], tutorials at [ECML, 2022] and [ACML, 2021], two ICML workshops [ICML, 2023] and [ICML, 2022], and two TTIC workshops [TTIC, 2023] and [TTIC, 2022]. In addition, she has served on several panel discussions and senior reviewing committees for major ML conferences.
- Learning Alignment with Human Feedback
- DA2PL Conference, Belgium, April 16-17, 2026.
- Do you Prefer Learning with Preferences? [Tutorial Website] [NeurIPS Website]
- With Aditya Gopalan. Our (amazing) panel: Yoshua Bengio · Craig Boutilier · Elad Hazan · Robert Nowak · Tobias Schnabel
- 37th Conference on Neural Information Processing Systems (NeurIPS), New Orleans. Dec 11th, 2023.
- Online Optimization meets Federated Learning. [UAI Website] [Tutorial Recording]
- With Kshitij Kumar Patel, TTIC.
- 39th Conference on Uncertainty in Artificial Intelligence (UAI), Pittsburgh. July 31st, 2023.
- ML Beyond Rewards: Online Learning with Preference Feedback. [Tutorial Website]
- European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). September, 2022.
- Battle of Bandits: Online learning from Preference Feedback. [Tutorial Website]
- Asian Conference on Machine Learning (ACML). November, 2021.
- Bandits for Beginners. [Video link]
- Microsoft Reactor: Data Science and Machine Learning Track. November, 2021.
- Preference based RL. [Video link]
- RL Track, Microsoft Research Summit. October, 2021.
- Short Tutorial: (1). Support Vector Machines, (2). Winnow and Perceptron Algorithms.
- M.S. Ramaiah Institute of Technology, Bangalore. May, 2018.
- Let's Tame the Bandits! [Video link]
- Undergraduate Summer School, CSA department, IISc Bangalore. July 2018.
- Preference based Learning through a Critical Lens [Tutorial Website]
- Do you Prefer Learning with Preferences? (at NeurIPS'23 Tutorial). December, 2023.
- Next decade of Federated Learning and role of Theory [Workshop Website]
- New Frontiers in Federated Learning, September, 2023.
- Trusted and Trustworthy AI [Summit Website]
- The Summit on AI in Society at the University of Chicago, October, 2022.
- Principled Methods for Leveraging Human Feedback towards AI Alignment
- IDEAL Annual Meeting and Industry Day, UIC Chicago. June 2024
- Online Federated Learning
- Federated and Collaborative Learning Workshop, Simons Institute, UC Berkeley. July 2023 [Talk Recording]
- Dueling-Opt: Convex Optimization with Relative Feedback
- IFDS Seminar, University of Wisconsin–Madison. October, 2022
- Fall OSL Seminar, Northwestern University. October, 2022
- Research at TTIC Series. Toyota Technological Institute at Chicago (TTIC), October, 2022
- Theory Seminar, CS, Purdue University. November, 2022
- Samueli CS department Seminar, UCLA. June, 2023
- Personalized Prediction Models with Federated Human Preferences
- TILOS Seminar, University of California, San Diego. November, 2023
- CATS Seminar, University of Maryland. November, 2023
- ML-Opt Seminar, University of Washington. October, 2023
- Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability
- UMich AI Symposium. October, 2023
- Theory-ML Seminar, CS dept, Carnegie Mellon University (CMU). August, 2023
- RL Theory Seminar. May, 2022 [Talk Recording]
- Talks at TTIC Series. Toyota Technological Institute at Chicago (TTIC), August, 2022
- CS Seminar, Northwestern University. October, 2022
- ML Seminar, University of Illinois Chicago (UIC). October, 2022
- University of Illinois Computer Science Speaker Series, UIUC. October, 2022
- ML Seminar, UChicago. October, 2022
- Adversarial Dueling Bandits
- NASSCOM AI Gamechangers. April, 2022
- Data Science in India, KDD Conference, India, August 2021.
- Information Aggregation from Unconventional Feedback
- Oracle Research, November, 2021.
- Chalmers University of Technology, November, 2021.
- Battle for Better: When and How Can We Learn Faster with Subsetwise Preferences?
- Spring Seminar, UT Austin. March 2023
- ISyE Seminar, Georgia Tech. March 2023
- The Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) Talk Series. October, 2022
- Preference based Reinforcement Learning (PbRL)
- Microsoft Research Tri-Lab Offsite. November, 2021
- RL Track, Microsoft Research Summit. October, 2021
- Battling Bandits: Exploiting Subsetwise Preferences
- SIERRA-Seminar, Inria, Paris. January 2020.
- Microsoft Research, Bangalore, India. October 2019.
- EECS department, University of Michigan, Ann Arbor. September, 2019
- Computer Science department, Stanford University. August, 2019
- EECS Symposium, IISc Bangalore. April, 2019.
- Carnegie Mellon University (CMU), Pittsburgh. March, 2019
- Qualcomm Research, Bangalore. May, 2018
- Bandits, Experts and Rank Aggregation
- TCS Research Lab, Bangalore. June, 2018
- Indian Institute of Technology (IIT) Madras. November, 2018
- Amazon, Bangalore. October, 2018
- IBM-IRL, Bangalore. July, 2018.
- Online Learning with Structured Losses
- Conduent Labs, Bangalore. October, 2017.