Seminar: “Online Learning in Complex Domains,” Dr. Cem Tekin (EE), EA-409, 1:30PM April 27

Seminar: “Online Learning in Complex Domains”

Asst. Prof. Dr. Cem Tekin, Department of Electrical and Electronics Engineering, Bilkent University
Friday, April 27, 13:30
EA-409

Abstract:
Recommender systems, dynamic pricing, medical diagnosis, network routing, and many other applications require ongoing learning and decision-making in real time. These problems are prime examples of the opportunities and difficulties posed by big data: the available information often arrives from a variety of sources and has diverse features, so integrating what is learned is subject to the curse of dimensionality. Moreover, these problems often involve large strategy sets and multiple, possibly conflicting objectives, which make efficient online learning very challenging. In this talk, I will present recent work on contextual and combinatorial bandit models that address these challenges.

First, I will discuss how different notions of optimality can be used to define the performance of learning algorithms when the learning task involves multiple objectives. Then, I will describe key insights into designing learning algorithms that efficiently learn to optimize these objectives, including how to aggregate information obtained from past decisions, learn the relevant information embedded in a small subset of the strategy set, and balance the tradeoff between exploration and exploitation.
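
To make the exploration-exploitation tradeoff mentioned above concrete, the sketch below shows the classical UCB1 rule on a toy multi-armed bandit. It is only an illustrative baseline, not the contextual, combinatorial, or multi-objective algorithms presented in the talk, and the Bernoulli arms and their success probabilities are made-up example values.

import math
import random

def ucb1(reward_fns, horizon):
    """Minimal UCB1: play each arm once, then repeatedly pick the arm that
    maximizes (empirical mean + sqrt(2 ln t / pulls)), which trades off
    exploitation of well-performing arms against exploration of rarely tried ones."""
    k = len(reward_fns)
    counts = [0] * k          # number of times each arm has been pulled
    means = [0.0] * k         # empirical mean reward of each arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # initialization: pull every arm once
        else:
            arm = max(range(k),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = reward_fns[arm]()                         # observe a stochastic reward
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
        total_reward += r
    return total_reward, counts

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical Bernoulli arms with success probabilities 0.3, 0.5, 0.7.
    arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
    reward, counts = ucb1(arms, horizon=5000)
    print("total reward:", reward, "pull counts:", counts)

Running the script, the pull counts concentrate on the best arm over time while the weaker arms are still sampled occasionally, which is the exploration-exploitation balance in its simplest form.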

Bio:
Cem Tekin is an Assistant Professor in the Department of Electrical and Electronics Engineering at Bilkent University. He received his PhD in Electrical Engineering: Systems, MS in Applied Mathematics, and MSE in Electrical Engineering: Systems from the University of Michigan in 2013, 2011, and 2010, respectively. He received his BS degree in Electrical and Electronics Engineering from ODTÜ in 2008, graduating as valedictorian. From 2013 to 2015, he was a postdoctoral scholar in the Electrical Engineering Department at UCLA. Cem has authored or coauthored over 50 research papers, 3 book chapters, and a research monograph. His research interests include bandit problems, reinforcement learning, multi-agent systems, and Markov decision processes. On the applications side, he works on learning problems in healthcare informatics, finance, real-time stream mining, recommender systems, influence maximization, and cognitive radio networks.