2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 | 2025
The seminar is jointly sponsored by Temple and Penn. The organizers are Brian Rider and Atilla Yilmaz (Temple), and Jiaoyang Huang, Jiaqi Liu, Robin Pemantle and Xin Sun (Penn).
Talks are Tuesdays 3:30 - 4:30 pm and are held either in Wachman Hall (Temple) or David Rittenhouse Lab (Penn) as indicated below.
For a chronological listing of the talks, click the year above.
Shanyin Tong, Columbia University
Mean-field games (MFGs) model non-cooperative games among large populations of agents and are widely applied in areas such as traffic flow, finance, and epidemic control. Inverse mean-field games address the challenge of inferring environmental factors from observed agent behavior. The coupled forward-backward structure of the MFG equations makes even the forward problem difficult to solve, and adds still greater complexity to the inverse problem. In this talk, I will introduce a policy iteration method for solving inverse MFGs. This method simplifies the problem by decoupling it into solving linear PDEs and linear inverse problems, leading to significant computational efficiency. The approach is flexible, accommodating a variety of numerical methods and machine learning tools. I will also present theoretical results that guarantee the convergence of our proposed method, along with numerical examples demonstrating its accuracy and efficiency.
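For readers unfamiliar with the forward-backward structure mentioned above, here is the prototypical second-order MFG system in standard textbook notation (the Hamiltonian H, couplings f and g, and diffusion parameter ν are generic placeholders, not taken from the talk): a Hamilton-Jacobi-Bellman equation solved backward in time for the value function u, coupled to a Fokker-Planck equation solved forward in time for the agent density m.

```latex
% Prototypical second-order MFG system (generic notation, not from the talk):
% backward HJB equation for the value function u,
% forward Fokker-Planck equation for the density m.
\begin{aligned}
-\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m),
  & u(x, T) &= g\big(x, m(\cdot, T)\big), \\
\partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \nabla_p H(x, \nabla u)\big) &= 0,
  & m(x, 0) &= m_0(x).
\end{aligned}
```

The backward equation depends on m and the forward equation depends on ∇u, which is exactly the coupling that policy iteration schemes aim to break by freezing one unknown while solving a linear equation for the other.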
Mert Gurbuzbalaban, Rutgers University
Langevin algorithms, integral to Markov Chain Monte Carlo methods, are crucial in machine learning, particularly for Bayesian inference in high-dimensional models and addressing challenges in stochastic non-convex optimization prevalent in deep learning. This talk delves into the practical aspects of stochastic Langevin algorithms through three illuminating examples. First, it explores their role in non-convex optimization, focusing on their efficacy in navigating complex landscapes. The discussion then extends to decentralized Langevin algorithms, emphasizing their relevance in distributed optimization scenarios, where data is dispersed across multiple sources. Lastly, the focus shifts to constrained sampling, aiming to sample from a target distribution subject to constraints. In each scenario, we introduce new algorithms with convergence guarantees and showcase their performance and scalability to large datasets through numerical examples.
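As background for the talk above, the basic iteration underlying all of these methods is the unadjusted Langevin algorithm (ULA): a gradient step on a potential U plus injected Gaussian noise, whose iterates approximately sample from the density proportional to exp(-U). The sketch below is a generic illustration with a standard Gaussian target (U(x) = x²/2); it is not the speaker's algorithm, and the step size and iteration count are arbitrary choices.

```python
import numpy as np

def grad_U(x):
    """Gradient of U(x) = x^2 / 2, so the target is the standard Gaussian."""
    return x

def ula(n_steps, step=0.01, x0=0.0, seed=0):
    """Unadjusted Langevin algorithm: x <- x - step * grad_U(x) + sqrt(2*step) * noise."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # Drift down the potential, then add Gaussian noise scaled by sqrt(2 * step).
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples[k] = x
    return samples

samples = ula(20000)
```

For small step sizes the iterates approximate draws from the target up to a discretization bias; Metropolis-adjusted variants remove that bias, and the decentralized and constrained settings discussed in the talk modify this same basic update.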
Doron Puder, Tel Aviv University
TBA
Benedek Valko, University of Wisconsin–Madison
TBA