The agenda of the workshop can be found here.
The list of lectures and poster presentations can be found here.
Lectures are 20 minutes long (including Q&A). The format of posters is flexible, but A0 size and landscape orientation are recommended.
Title: Learning Simulation: Graphs, Physics, and Weather
Abstract: Simulation is one of the most important tools in science and engineering. However, accurate simulation faces two challenges: (1) heavy compute requirements, and (2) sophisticated underlying equations that require deep expertise to formulate. Recent advances in machine learning-based simulation are now addressing both challenges by (1) allowing dynamics to be modeled with cheaper representations and computations, and (2) learning dynamics models directly from data. This talk will survey advances in graph-based learned simulation from the past few years, then deep dive into recent advances in machine learning-based weather prediction that have resulted in learned simulators that outperform the top operational forecasting systems in the world.
Title: Graph Spectral Processing and Analysis for 3D Point Clouds [slides]
Abstract: Geometric data acquired from real-world scenes, e.g., 2D depth images, 3D point clouds, and 4D dynamic point clouds, have found a wide range of applications including autonomous driving, robotics, augmented and virtual reality, surveillance, etc. Due to the irregular sampling patterns of most geometric data, traditional image/video processing methodologies are of limited use, while Graph Signal Processing (GSP)—a fast-developing field in the signal processing community—enables processing signals that reside on irregular domains. Further, GSP provides insightful spectral interpretations and domain knowledge for the recently developed Graph Neural Networks (GNNs), leading to interpretability and robustness of GNNs. In this talk, I will mainly introduce our work on graph-based representation, reconstruction, and analysis of 3D point clouds to illustrate the power of graph spectral processing and analysis.
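As a toy illustration of the graph construction that underlies this line of work, the sketch below builds a k-nearest-neighbor graph with Gaussian kernel weights over a handful of 3D points. The point coordinates, k, and sigma are illustrative choices, not values from the talk.

```python
import math

# Toy 3D point cloud: three nearby points and one outlier (coordinates made up).
points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (1.0, 1.0, 1.0)]

def knn_graph(points, k=2, sigma=0.5):
    """Connect each point to its k nearest neighbors with Gaussian kernel
    weights w_ij = exp(-||p_i - p_j||^2 / (2 sigma^2)), a common construction
    in graph-based point cloud processing."""
    n = len(points)
    weights = {}
    for i in range(n):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
            for j in range(n) if j != i
        )
        for d2, j in dists[:k]:
            w = math.exp(-d2 / (2 * sigma ** 2))
            weights[(min(i, j), max(i, j))] = w  # store each edge once (symmetric)
    return weights

w = knn_graph(points)
print(w[(0, 1)])  # nearby pair: weight close to 1
print(w[(1, 3)])  # edge to the outlier: weight close to 0
```

On this toy cloud, nearby points receive edge weights close to 1 while the isolated point connects only weakly; this weighted graph (and its Laplacian spectrum) is the structure that graph spectral methods then exploit.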
Bio: Wei Hu is an Assistant Professor at the Wangxuan Institute of Computer Technology, Peking University. She received the B.S. degree in Electrical Engineering from the University of Science and Technology of China in 2010, and the Ph.D. degree in Electronic and Computer Engineering from the Hong Kong University of Science and Technology in 2015. She was a Researcher with Technicolor, Rennes, France, from 2015 to 2017. Her research interests are graph signal processing, graph-based machine learning, and 3D visual computing. She has authored over 60 international journal and conference publications, with several paper awards including Best Paper Candidate at CVPR 2021 and the Best Student Paper Runner-Up Award at ICME 2020. She was awarded the 2021 IEEE Multimedia Rising Star Award—Honorable Mention, and is a Boya Young Fellow of Peking University. She serves as an Associate Editor for the IEEE Signal Processing Magazine, the IEEE Transactions on Signal and Information Processing over Networks, etc. She is also a member of the IEEE MMSP and MSA Technical Committees.
Title: Reliable AI for Graph Signal Processing
Abstract: The new wave of artificial intelligence is impacting industry, public life, and the sciences in an unprecedented manner. A particular emphasis is on graph data due to the importance of application settings such as recommender systems, social media, or molecular dynamics. However, one current major drawback is the lack of reliability. The goal of this lecture is to first provide an introduction into this new vibrant research area. We will then survey recent advances, in particular, concerning performance guarantees and explainability methods, which are key to ensure reliability. Finally, we will discuss fundamental limitations in terms of computability, which seriously affect diverse aspects of reliability, and reveal a surprising connection to novel computing approaches such as neuromorphic computing and quantum computing.
Bio: Gitta Kutyniok (https://www.ai.math.lmu.de/kutyniok) currently has a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, Gitta Kutyniok has held an Adjunct Professorship in Machine Learning at the University of Tromso since 2019. Gitta Kutyniok has received various awards for her research such as an award from the Universität Paderborn in 2003, the Research Prize of the Justus-Liebig Universität Gießen and a Heisenberg-Fellowship in 2006, and the von Kaven Prize by the DFG in 2007. She was invited as the Noether Lecturer at the ÖMG-DMV Congress in 2013, a plenary lecturer at the 8th European Congress of Mathematics (8ECM) in 2021, and the lecturer of the London Mathematical Society (LMS) Invited Lecture Series in 2022. She was also honored by invited lectures at both the International Congress of Mathematicians 2022 (ICM 2022) and the International Congress on Industrial and Applied Mathematics (ICIAM 2023). Moreover, she was elected as a member of the Berlin-Brandenburg Academy of Sciences and Humanities in 2017 and of the European Academy of Sciences in 2022, and became a SIAM Fellow in 2019.
She is currently the main coordinator of the Research Focus “Next Generation AI” at the Center for Advanced Studies at LMU and the DFG-Priority Program “Theoretical Foundations of Deep Learning”, serves as Vice President-at-Large of SIAM, and acts as LMU-Director of the Konrad Zuse School of Excellence in Reliable AI (relAI) in Munich. Gitta Kutyniok’s research work covers, in particular, the areas of applied and computational harmonic analysis, artificial intelligence, compressed sensing, deep learning, imaging sciences, inverse problems, and applications to life sciences, robotics, and telecommunication.
Title: AI and Medicine: Graph and Hypergraph Representation Learning
Abstract: In this talk I will focus on how to build a digital patient twin using graph representations, considering physiological (cardiovascular), clinical (inflammation), and molecular variables (multi-omics and genetics). I will consider different pathologies, such as inflammaging and immunosenescence, through the use of graph neural ODEs. I will discuss how this approach could also keep clinicians in the loop, avoiding excessive automation through the use of logic and explainer frameworks.
Bio: Pietro Liò received the PhD degree in complex systems and nonlinear dynamics from the School of Informatics, Department of Engineering, University of Firenze, Italy, and the PhD degree in theoretical genetics from the University of Pavia, Italy. He is currently a professor of computational biology with the Department of Computer Science and Technology, University of Cambridge, and a member of the Artificial Intelligence Group. He is also a member of the Cambridge Centre for AI in Medicine, ELLIS (European Laboratory for Learning and Intelligent Systems), Academia Europaea, and the Asia Pacific Artificial Intelligence Association. His research interests include graph representation learning, AI and medicine, and systems biology.
Title: Fourier Analysis with Direction [slides]
Abstract: Mainstream graph signal processing (GSP) provides no general solution in the case of directed edges in the signal domain, which is unsatisfactory (and somewhat surprising) given that classical discrete time is directed. In this talk I first present a possible solution for arbitrary directed graphs by generalizing the concept of cyclic boundary condition associated with the DFT. Then I present a novel approach to Fourier analysis and signal processing, fundamentally different from GSP, that targets signals whose domain is partially ordered. Important examples include power sets, meet/join lattices, and directed acyclic graphs. I present the theory and some prototypical applications in signal processing and machine learning.
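The connection to classical discrete time mentioned above can be made concrete with a minimal sketch: on a directed cycle graph, applying the adjacency matrix as the graph shift operator reproduces the circular delay underlying the DFT. The node count and signal values below are illustrative, and the adjacency convention (edge i → i+1 mod n, acting as (Ax)_i = x_{i-1 mod n}) is one common choice.

```python
# Directed cycle on n nodes with edges i -> (i+1) mod n. Using its adjacency
# matrix as the graph shift operator recovers the classical delay of
# discrete-time signal processing -- the special case that underlies the DFT.
n = 4

def cycle_shift(x):
    """Multiply by the adjacency matrix A of the directed cycle:
    (Ax)_i = x_{(i-1) mod n}, i.e., a circular delay by one sample."""
    return [x[(i - 1) % len(x)] for i in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0]
print(cycle_shift(x))  # [4.0, 1.0, 2.0, 3.0] -- one circular delay

# The cyclic boundary condition: applying the shift n times is the identity.
y = x
for _ in range(n):
    y = cycle_shift(y)
assert y == x
```

The eigenvectors of this shift are exactly the DFT basis vectors, which is why generalizing the cyclic boundary condition is a natural route to a Fourier analysis for arbitrary directed graphs.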
Bio: Markus Püschel is a Professor and former Department Head of Computer Science at ETH Zurich, Switzerland. Before, he was a Professor of Electrical and Computer Engineering at Carnegie Mellon University, where he still has an adjunct status. He is an IEEE Fellow and won the main student teaching awards at both CMU and ETH. As a department head he initiated a major faculty growth program and co-founded the Swiss Data Science Center. For more information on activities and other research interests, please visit https://acl.inf.ethz.ch/.
Title: Low Pass Graph Signal Processing - Data Modeling, Inference, and Beyond [slides]
Abstract: As a key building block in graph signal processing (GSP), graph filters have been used to give a signal processing (SP) interpretation of network dynamics and the resultant graph data. SP methods such as frequency analysis and system identification have been applied with analogous interpretations to graph data, allowing us to classify graph data as low-, mid-, or high-pass graph signals. This talk concentrates on GSP with low pass graph signals, whose underlying graph filter attenuates content in the high graph frequencies while retaining content in the low graph frequencies; this effectively captures the common notion of “smooth graph signals”. We first show the prevalence of low pass graph signals in data models such as those for social networks and financial markets. We then demonstrate how properties of low pass graph signals can be leveraged for various graph inference tasks from data. We will discuss recent results on graph topology learning, inference of graph topology features such as community structure and centrality, detection of low pass graph signals, and more. The effects of low pass filtering in graph machine learning will also be discussed.
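As a minimal sketch of the low pass intuition (the graph, signal, and step size below are illustrative, not from the talk), one diffusion step x ← x − αLx is a simple low pass graph filter: it attenuates high graph frequencies, which shows up as a drop in the Laplacian quadratic form that quantifies smoothness.

```python
# Toy 4-node path graph: 0 - 1 - 2 - 3 (edge list is illustrative).
edges = [(0, 1), (1, 2), (2, 3)]

def laplacian_apply(x, edges):
    """Apply the combinatorial graph Laplacian: (Lx)_i = sum over neighbors j of (x_i - x_j)."""
    y = [0.0] * len(x)
    for i, j in edges:
        y[i] += x[i] - x[j]
        y[j] += x[j] - x[i]
    return y

def smoothness(x, edges):
    """Laplacian quadratic form x^T L x = sum over edges of (x_i - x_j)^2; small means smooth."""
    return sum((x[i] - x[j]) ** 2 for i, j in edges)

def lowpass_step(x, edges, alpha=0.25):
    """One diffusion step x <- x - alpha * L x, a simple low pass graph filter."""
    lx = laplacian_apply(x, edges)
    return [xi - alpha * li for xi, li in zip(x, lx)]

x = [1.0, -1.0, 1.0, -1.0]   # highly oscillatory (high graph frequency) signal
y = lowpass_step(x, edges)
print(smoothness(x, edges), smoothness(y, edges))  # 12.0 0.5
```

The oscillatory input has smoothness 12.0; after a single filtering step it drops to 0.5, i.e., the output is "smoother" in exactly the sense used above, which is the structural property the inference methods in the talk exploit.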
Bio: Hoi-To Wai received his B.Eng. (with First Class Honors) and M.Phil. degrees in Electronic Engineering from The Chinese University of Hong Kong (CUHK) in 2010 and 2012, respectively, and his PhD degree in Electrical Engineering from Arizona State University (ASU) in Fall 2017. He is an Assistant Professor in the Department of Systems Engineering & Engineering Management at CUHK, and an Associate Editor for the IEEE Transactions on Signal and Information Processing over Networks. He has held research positions at ASU, UC Davis, Telecom ParisTech, Ecole Polytechnique, and MIT. Hoi-To’s research interests are in the broad area of signal processing, machine learning, and distributed optimization, with a focus on their applications to network science. His dissertation received the 2017 Dean’s Dissertation Award from the Ira A. Fulton Schools of Engineering at ASU, and he is a recipient of the Best Student Paper Award at ICASSP 2018.
Tangent Bundle Filters and Neural Networks from Manifolds to Cellular Sheaves and Back [slides]
T-HyperGNNs: Hypergraph Neural Networks Via Tensor Representations [slides]
GraphMAD: Graph Mixup for Data Augmentation using Data-Driven Convex Clustering [slides]
Decentralized Graph Based Filter Design Using Normalized Adjacency Matrix [slides]
Recursive Median Filters for Time-Varying Graph Signal Denoising [slides]
Limits of graph neural networks on large random graphs [slides]
Multiscale Hodge Scattering Networks on Simplicial Complexes [slides]
A data-driven graph framework for geometric understanding of deep learning [slides]
Fast Topology Identification from Smooth Graph Signals [slides]
Robust Graph Filter Identification and Graph Denoising from Signal Observations [slides]