Tutorials:
ML Beyond Rewards: Online Learning with Preference Feedback. [Tutorial Website]
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). September 2022.
Battle of Bandits: Online Learning from Preference Feedback. [Tutorial Website]
Asian Conference on Machine Learning (ACML). November 2021.
Bandits for Beginners.
Microsoft Reactor: Data Science and Machine Learning Track. November 2021.
Let's Tame the Bandits!
Undergraduate Summer School, CSA department, IISc Bangalore. July 2018.
Short Tutorial: (1) Support Vector Machines, (2) Winnow and Perceptron Algorithms.
M.S. Ramaiah Institute of Technology, Bangalore. May 2018.

Research Talks:
Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability
RL Theory Seminar. May 2022.
Adversarial Dueling Bandits
NASSCOM AI Gamechangers. April 2022.
Data Science in India, KDD Conference, India. August 2021.
Information Aggregation from Unconventional Feedback
Oracle Research. November 2021.
Chalmers University of Technology. November 2021.
Preference-based Reinforcement Learning (PbRL)
Microsoft Research Tri-Lab Offsite. November 2021.
RL Track, Microsoft Research Summit. October 2021.
Battling Bandits: Exploiting Subsetwise Preferences
Sabarmati Seminar Series, IIT Gandhinagar. July 2021.
SIERRA-Seminar, Inria, Paris. January 2020.
Microsoft Research, Bangalore, India. October 2019.
EECS department, University of Michigan, Ann Arbor. September 2019.
Computer Science department, Stanford University. August 2019.
EECS Symposium, IISc Bangalore. April 2019.
Carnegie Mellon University (CMU), Pittsburgh. March 2019.
Qualcomm Research, Bangalore. May 2018.
Bandits, Experts and Rank Aggregation
Indian Institute of Technology (IIT) Madras. November 2018.
Amazon, Bangalore. October 2018.
IBM-IRL, Bangalore. July 2018.
TCS Research Lab, Bangalore. June 2018.
Online Learning with Structured Losses
Conduent Labs, Bangalore. October 2017.