IE Seminar: “Toward the Foundation of Dynamic Multi-agent Learning”, Muhammed Omer Sayin, 1:30PM November 11 (EN)

Seminar on November 11: “Toward the Foundation of Dynamic Multi-agent Learning” by Muhammed Omer Sayin, Bilkent University

Speaker: Muhammed Omer Sayin, Bilkent University

Date & Time: Friday, November 11, 2022, 13:30

Place: EA-409

Title: Toward the Foundation of Dynamic Multi-agent Learning

Abstract: Many of the forefront applications of reinforcement learning involve multiple agents and dynamic environments, e.g., playing chess and Go, autonomous driving, and robotics. Unfortunately, in contrast to the plethora of algorithmic solutions, there has been limited progress on the foundation of dynamic multi-agent learning, especially independent learning, in which intelligent agents take actions consistent with their own objectives, e.g., based on behavioral learning models.

In this talk, I will present a new framework and several new independent learning dynamics. These dynamics converge almost surely to an equilibrium, or converge selectively to an equilibrium maximizing social welfare, in specific but important classes of Markov games, which are ideal models for decentralized multi-agent reinforcement learning. These results can also be generalized to cases where agents do not know the model of the environment, do not observe opponent actions, and adopt different learning rates. I will conclude with several remarks on possible future research directions for the presented framework.

Bio: Muhammed Omer Sayin is an Assistant Professor in the Department of Electrical and Electronics Engineering at Bilkent University. He was previously a Postdoctoral Associate at the Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology (MIT). He received his Ph.D. from the University of Illinois at Urbana-Champaign (UIUC) in December 2019. During his Ph.D. studies, he completed two research internships at Toyota InfoTech Labs, Mountain View, CA. He received his M.S. and B.S. degrees from Bilkent University, Turkey, in 2015 and 2013, respectively. He is a recipient of the TUBITAK 2232-B International Fellowship for Early-Stage Researchers. The overarching theme of his research is to develop the theoretical and algorithmic foundation of learning and autonomy in complex, dynamic, and multi-agent systems.