The Department of Electrical and Computer Engineering invites you to lectures by Christina Fragouli and Suhas Diggavi, Professors at the ECE Department of the University of California, Los Angeles (UCLA), on Wednesday 6/28, 5-7pm in Auditorium #1. The lectures focus on research challenges in the area of machine learning.
First Talk (Prof. Christina Fragouli)
Title: Solving Stochastic Contextual Bandits with Linear Bandits Algorithms
Abstract
Linear bandit and contextual linear bandit problems have recently attracted extensive attention, as their elegant formulations support impactful active learning applications.
In linear bandits, at each round a learner plays an action from a fixed action space A and receives a reward given by the inner product of the action and an unknown parameter vector, plus noise. Contextual linear bandits add another layer of complexity by allowing the action space to differ at each round, so as to capture context. The goal is to design an algorithm that, after a number of action plays, learns to play as close as possible to the unknown optimal policy. The contextual problem is considered more challenging than the linear bandit problem, which can be viewed as a contextual bandit problem with a fixed context. Surprisingly, in this talk we show that the stochastic contextual problem can be solved as if it were a linear bandit problem. In particular, we establish a novel reduction framework that converts every stochastic contextual linear bandit instance to a linear bandit instance. Our reduction framework opens up a new way to approach stochastic contextual linear bandit problems, and enables significant savings in communication cost in distributed setups. Furthermore, it yields improved regret bounds in a number of instances.
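For readers unfamiliar with the setup, the following minimal Python sketch simulates the stochastic linear bandit reward model described above and measures regret against the unknown optimal action. It is illustrative only, not the reduction from the talk, and all names and parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    d, K, T = 5, 20, 1000                    # dimension, size of action space A, rounds
    theta_star = rng.normal(size=d)          # unknown parameter vector
    actions = rng.normal(size=(K, d))        # fixed action space A

    def pull(a):
        """Reward = inner product of the chosen action and theta_star, plus noise."""
        return actions[a] @ theta_star + rng.normal(scale=0.1)

    best_value = np.max(actions @ theta_star)    # value of the (unknown) optimal action
    regret = 0.0
    for t in range(T):
        a = rng.integers(K)                      # placeholder policy: uniform exploration
        reward = pull(a)                         # observed noisy reward
        regret += best_value - actions[a] @ theta_star
    print(f"cumulative (pseudo-)regret of uniform play over {T} rounds: {regret:.1f}")

A learning algorithm would replace the uniform choice of action with one that uses the observed rewards to estimate the unknown parameter vector; in the contextual variant, the array of available actions would change at every round.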
Short CV
Christina Fragouli is a Professor in the Electrical and Computer Engineering Department at UCLA. She received the B.S. degree in Electrical Engineering from the National Technical University of Athens, Athens, Greece, and the M.Sc. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles. She has worked at the Information Sciences Center, AT&T Labs, Florham Park, New Jersey, and at the National University of Athens. She has also visited Bell Laboratories, Murray Hill, NJ, and DIMACS, Rutgers University. From 2006 to 2015 she was an Assistant and then Associate Professor in the School of Computer and Communication Sciences, EPFL, Switzerland.
She is an IEEE Fellow and has served on several IEEE committees as a member or chair, including serving as the 2022 President of the IEEE Information Theory Society. She has also served as an IEEE Information Theory Society Distinguished Lecturer and as an Associate Editor for IEEE Communications Letters, the Elsevier Journal on Computer Communication, IEEE Transactions on Communications, IEEE Transactions on Information Theory, and IEEE Transactions on Mobile Communications. Her current research interests lie at the intersection of network algorithms, coding techniques, and machine learning.
===================================================================================================================
Second Talk (Prof. Suhas Diggavi)
Title: A Statistical Framework for Private Personalized Federated Learning and Estimation
Abstract:
In federated learning, edge nodes collaboratively build learning models from locally generated data. Federated learning (FL) introduces several unique challenges relative to traditional learning, including (i) the need for privacy guarantees on the locally residing data, (ii) communication efficiency from edge devices, (iii) robustness to malicious/malfunctioning nodes, and (iv) the need for personalization given heterogeneity in data and resources. In this talk we focus on privacy and personalization.
We will first describe some of our recent work on trade-offs between privacy and learning performance for federated learning in the context of the shuffled privacy model. Our goals include accounting for (client) sampling, obtaining better compositional bounds (using Rényi DP), and ensuring communication efficiency. We will briefly present our theoretical results along with numerics.
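As a rough illustration of the kind of private aggregation such analyses concern, the sketch below clips each client's update and adds calibrated noise before averaging. This is a generic differential-privacy-style sketch, not the shuffled-model scheme from the talk; all names and constants are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, clip, sigma = 100, 10, 1.0, 0.5        # clients, dimension, clip norm, noise level
    updates = rng.normal(size=(n, d))            # hypothetical local model updates

    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    clipped = updates * np.minimum(1.0, clip / norms)    # bound each client's contribution
    noisy_mean = clipped.mean(axis=0) + rng.normal(scale=sigma * clip / n, size=d)
    print(noisy_mean[:3])

The privacy/utility trade-off then comes from how the noise level, the number of clients, client sampling, and the number of rounds compose, which is what the theoretical results quantify.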
Statistical heterogeneity of data in FL has motivated the design of personalized learning, where individual (personalized) models are trained through collaboration. In the second part of the talk we give a statistical framework that unifies several different personalized FL algorithms and also suggests new ones. We develop novel private personalized estimation under this framework. We then use our statistical framework to propose new personalized learning algorithms, including AdaPeD, which is based on information-geometry regularization and numerically outperforms several known algorithms.
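To give a feel for personalized estimation, the toy sketch below has each client interpolate between its own local mean and a collaboratively computed global mean, with weights set by assumed within- and across-client variances. This is a simple shrinkage-style illustration, not AdaPeD or the framework from the talk; the data and variances are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    # heterogeneous clients: each draws 50 samples around its own (different) mean
    clients = [rng.normal(loc=mu, scale=1.0, size=50) for mu in rng.normal(scale=2.0, size=10)]

    global_mean = np.mean([c.mean() for c in clients])       # collaborative estimate
    tau2, sigma2 = 4.0, 1.0                                   # assumed across-/within-client variances
    personalized = [
        (len(c) / sigma2 * c.mean() + global_mean / tau2)     # precision-weighted shrinkage
        / (len(c) / sigma2 + 1.0 / tau2)
        for c in clients
    ]
    print([round(p, 2) for p in personalized])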
Parts of this talk are joint work with Kaan Ozkara, Antonious Girgis, Deepesh Data, Peter Kairouz, and Theertha Suresh, and have appeared in AISTATS, NeurIPS, ACM CCS, and ICLR, among other venues.
Biography:
Suhas Diggavi is currently a Professor of Electrical and Computer Engineering at UCLA. He received his undergraduate education from IIT Delhi and his PhD from Stanford University. He has worked as a principal member of research staff at AT&T Shannon Laboratories and directed the Laboratory for Information and Communication Systems (LICOS) at EPFL. At UCLA, he directs the Information Theory and Systems Laboratory.
His research interests include information theory and its applications to several areas including machine learning, security & privacy, wireless networks, data compression, cyber-physical systems, bio-informatics and neuroscience; more information can be found at http://licos.ee.ucla.edu.
He has received several recognitions for his research from the IEEE and ACM, including the 2013 IEEE Information Theory Society & Communications Society Joint Paper Award, the 2021 ACM Conference on Computer and Communications Security (CCS) Best Paper Award, the 2013 ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc) Best Paper Award, and the 2006 IEEE Donald G. Fink Prize Paper Award, among others. He was selected as a Guggenheim Fellow in 2021.
He also received the 2019 Google Faculty Research Award, the 2020 Amazon Faculty Research Award, and the 2021 Facebook/Meta Faculty Research Award. He has served as an IEEE Distinguished Lecturer and on the Board of Governors of the IEEE Information Theory Society (2016-2021). He is a Fellow of the IEEE.
He is the incoming Editor-in-Chief of the IEEE BITS Information Theory Magazine and has been an Associate Editor for IEEE Transactions on Information Theory, IEEE/ACM Transactions on Networking, and other journals and special issues, and has served on the program committees of several IEEE conferences. He has also helped organize IEEE and ACM conferences, including serving as Technical Program Co-Chair for the 2012 IEEE Information Theory Workshop (ITW), Technical Program Co-Chair for the 2015 IEEE International Symposium on Information Theory (ISIT), and General Co-Chair for ACM MobiHoc 2018. He has 8 issued patents.