Posts by Collection

portfolio

publications


talks

Tutorials:


ML Beyond Rewards: Online Learning with Preference Feedback. [Tutorial Website]
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD). September, 2022.
Battle of Bandits: Online Learning from Preference Feedback. [Tutorial Website]
Asian Conference on Machine Learning (ACML). November, 2021.
Bandits for Beginners.
Microsoft Reactor: Data Science and Machine Learning Track. November, 2021.
Let's Tame the Bandits!
Undergraduate Summer School, CSA Department, IISc Bangalore. July, 2018.
Short Tutorial: (1) Support Vector Machines, (2) Winnow and Perceptron Algorithms.
M.S. Ramaiah Institute of Technology, Bangalore. May, 2018.

Panel:


Trusted and Trustworthy AI. [Summit Website]
The Summit on AI in Society. October, 2022.

Research Talks:


Battle for Better: When and How Can We Learn Faster with Subsetwise Preferences?
The Institute for Data, Econometrics, Algorithms, and Learning (IDEAL) Talk Series. October, 2022.
Dueling-Opt: Convex Optimization with Relative Feedback
IFDS Seminar, University of Wisconsin–Madison. October, 2022.
Fall OSL Seminar, Northwestern University. October, 2022.
Research at TTIC Series, Toyota Technological Institute at Chicago (TTIC). October, 2022.
Theory Seminar, CS, Purdue University. November, 2022.
Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability
RL Theory Seminar. May, 2022.
Talks at TTIC Series, Toyota Technological Institute at Chicago (TTIC). August, 2022.
CS Seminar, Northwestern University. October, 2022.
ML Seminar, University of Illinois Chicago (UIC). October, 2022.
University of Illinois Computer Science Speaker Series, UIUC. October, 2022.
ML Seminar, UChicago. October, 2022.
Adversarial Dueling Bandits
NASSCOM AI Gamechangers. April, 2022.
Data Science in India, KDD Conference, India. August, 2021.
Information Aggregation from Unconventional Feedback
Oracle Research. November, 2021.
Chalmers University of Technology. November, 2021.
Preference-based Reinforcement Learning (PbRL)
Microsoft Research Tri-Lab Offsite. November, 2021.
RL Track, Microsoft Research Summit. October, 2021.
Battling Bandits: Exploiting Subsetwise Preferences
Sabarmati Seminar Series, IIT Gandhinagar. July, 2021.
SIERRA Seminar, Inria, Paris. January, 2020.
Microsoft Research, Bangalore, India. October, 2019.
EECS Department, University of Michigan, Ann Arbor. September, 2019.
Computer Science Department, Stanford University. August, 2019.
EECS Symposium, IISc Bangalore. April, 2019.
Carnegie Mellon University (CMU), Pittsburgh. March, 2019.
Qualcomm Research, Bangalore. May, 2018.
Bandits, Experts and Rank Aggregation
Indian Institute of Technology (IIT) Madras. November, 2018.
Amazon, Bangalore. October, 2018.
IBM-IRL, Bangalore. July, 2018.
TCS Research Lab, Bangalore. June, 2018.
Online Learning with Structured Losses
Conduent Labs, Bangalore. October, 2017.

teaching