
Program Schedule
Wednesday

Session | Time | Speaker
---|---|---
Speaker 1 | 9:00-10:00 | Bastian Rieck
Break | 10:00-10:15 |
Contributed Talk A1 | 10:15-10:35 |
Contributed Talk A2 | 10:35-10:55 |
Contributed Talk A3 | 10:55-11:15 |
Speaker 2 | 11:30-12:30 | Reihaneh Rabbany
Lunch | 12:30-14:00 |
Contributed Talk B1 | 14:00-14:20 |
Contributed Talk B2 | 14:20-14:40 |
Contributed Talk B3 | 14:40-15:00 |
Break | 15:00-15:15 |
Speaker 3 | 15:15-16:15 | Danai Koutra
Poster Session 1 | 16:30-17:30 |
Dinner | 18:30-20:30 |

Thursday

Session | Time | Speaker
---|---|---
Speaker 1 | 9:00-10:00 | Mathilde Papillon
Break | 10:00-10:15 |
Contributed Talk C1 | 10:15-10:35 |
Contributed Talk C2 | 10:35-10:55 |
Contributed Talk C3 | 10:55-11:15 |
Lunch | 11:15-13:00 |
Speaker 2 | 13:00-14:00 | Melanie Weber
Poster Session 2 | 14:00-15:00 |
Travel to Excursion | 15:00-16:00 |
Excursion | 16:00-19:00 |

Friday

Session | Time | Speaker
---|---|---
Speaker 1 | 9:00-10:00 | Dhananjay Bhaskar
Break | 10:00-10:15 |
Contributed Talk D1 | 10:15-10:35 |
Contributed Talk D2 | 10:35-10:55 |
Contributed Talk D3 | 10:55-11:15 |
Speaker 2 | 11:30-12:30 | Dominique Beaini
Lunch | 12:30-14:00 |
TMLR 1 | 14:00-14:20 |
TMLR 2 | 14:20-14:40 |
TMLR 3 | 14:40-15:00 |
TMLR 4 | 15:00-15:20 |
Break | 15:20-15:30 |
Speaker 3 | 15:30-16:30 | Naoki Saito
General Information
Talks in the oral sessions are 20 minutes long (including Q&A). We will have two poster sessions: (a) Wednesday 16:30-17:30 and (b) Thursday 14:00-15:00. The poster format is flexible, but A0 size in portrait orientation is recommended.
Our main excursion will be to the Montreal Botanical Garden. Stay tuned for more info!
Talks
Speaker 1, Wednesday 9:00 - 10:00
Bastian Grossenbacher Rieck, University of Fribourg
Title: TBD
Abstract: TBD
Bio: TBD
Speaker 2, Wednesday 11:30 - 12:30
Reihaneh Rabbany, McGill University, Mila – Quebec AI Institute
Title: TBD
Abstract: TBD
Bio: TBD
Speaker 3, Wednesday 15:15 - 16:15
Danai Koutra, University of Michigan, Ann Arbor
Title: TBD
Abstract: TBD
Bio: TBD
Speaker 1, Thursday 9:00 - 10:00
Mathilde Papillon, UC Santa Barbara
Title: Make Any Graph Neural Network Go Topological with TopoTune
Abstract: Graph Neural Networks (GNNs) excel at learning from relational datasets, processing node and edge features in a way that preserves the symmetries of the graph domain. However, many complex systems, such as biological or social networks, involve complex multiway interactions that go beyond the graph’s pairwise relationships. The emerging field of Topological Deep Learning (TDL) aims to accommodate and leverage these higher-order structures. Unlike the graph deep learning ecosystem, however, TDL lacks a standardized framework for easily defining new architectures, which restricts its accessibility and applicability. In this talk, I will introduce TopoTune, a lightweight software framework that allows practitioners to define, build, and train general TDL models with unprecedented ease. Theoretical results show that TopoTune’s models generalize and subsume the current landscape of TDL models. Extensive experiments show that these architectures consistently match or outperform previous models, often with less model complexity.
Bio: Mathilde Papillon is a Physics PhD candidate in the Geometric Intelligence Lab at the University of California Santa Barbara where she develops novel deep learning methods leveraging geometry and topology. She harnesses these models to study relational data, with a special focus on full-body human movement. Mathilde obtained her BSc in Honours Physics from McGill University and was recently awarded Canada’s Post Graduate Doctoral Fellowship.
Speaker 2, Thursday 13:00 - 14:00
Melanie Weber, Harvard University
Title: A Geometric Lens on Challenges in Graph Machine Learning: Insights and Remedies
Abstract: Graph Neural Networks (GNNs) are a popular architecture for learning on graphs. While they have achieved notable success in areas such as biochemistry, drug discovery, and materials science, GNNs are not without challenges: deeper GNNs exhibit instability due to the convergence of node representations (oversmoothing), which can reduce their effectiveness in learning long-range dependencies that are often crucial in applications. In addition, GNNs have limited expressivity in that there are fundamental function classes that they cannot learn. In this talk, we will discuss both challenges from a geometric perspective. We propose and study unitary graph convolutions, which allow for deeper networks that provably avoid oversmoothing during training. Our experimental results confirm that Unitary GNNs achieve competitive performance on benchmark datasets. An effective remedy for limited expressivity is the use of encodings, which augment the input graph with additional structural information. We propose novel encodings based on discrete Ricci curvature, which lead to significant gains in empirical performance and expressivity thanks to capturing higher-order relational information. We then consider the more general question of how higher-order relational information can be leveraged most effectively in graph learning. We propose a set of encodings that are computed on a hypergraph parametrization of the input graph and provide theoretical and empirical evidence for their effectiveness.
Bio: Melanie is an Assistant Professor of Applied Mathematics and of Computer Science at Harvard University, where she leads the Geometric Machine Learning Group. Her research studies geometric structure in data and how to leverage such information for the design of new, efficient Machine Learning algorithms with provable guarantees. In 2021-2022, she was a Hooke Research Fellow at the Mathematical Institute in Oxford. Previously, she received her PhD from Princeton University (2021). She is a recipient of the IMA Leslie Fox Prize in Numerical Analysis and a Sloan Research Fellowship in Mathematics.
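The oversmoothing phenomenon mentioned in the abstract is easy to reproduce numerically: repeated neighborhood averaging drives all node representations toward a common vector. Below is a minimal NumPy sketch on a made-up 4-node graph (illustrative only, not taken from the talk):

```python
import numpy as np

# A small illustrative 4-node graph with self-loops added (A + I),
# as in common GCN-style propagation. Values are arbitrary.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float) + np.eye(4)

P = A / A.sum(axis=1, keepdims=True)              # row-normalized averaging operator
X = np.random.default_rng(0).normal(size=(4, 3))  # random node features

for _ in range(100):                              # 100 rounds of neighborhood averaging
    X = P @ X

# All node representations have collapsed onto (nearly) the same vector.
print(np.ptp(X, axis=0).max())  # close to 0
```

Unitary convolutions of the kind discussed in the talk are designed precisely to prevent this collapse in deep networks.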
Speaker 1, Friday 9:00 - 10:00
Dhananjay Bhaskar, Yale University
Title: TBD
Abstract: TBD
Bio: TBD
Speaker 2, Friday 11:30 - 12:30
Dominique Beaini, Valence Labs, Université de Montréal, Mila – Quebec AI Institute
Title: How to Learn Molecules
Abstract: How can we build powerful representations of molecules for drug discovery? We’ll dive into the different algorithms, the challenges they face in terms of expressivity, and how to overcome them. We’ll discuss building positional and structural encodings, and how they can make GNNs quite expressive without algorithmic changes. Then, we’ll see how to build graph Transformers for pre-training on large molecular datasets and scale them to infinity. Finally, we’ll step into the world of multi-modality to learn how molecules impact human cells from a morphological perspective.
Bio: Dominique Beaini is a research unit team lead at Valence Labs and also serves as an adjunct professor in the Department of Computer Science and Operations Research at Université de Montréal and at Mila – Quebec AI Institute. His research interests include graph neural networks, self-supervised learning, quantum mechanics, drug discovery, computer vision, and robotics. Dominique completed his PhD at Polytechnique Montréal, focusing on robotics and computer vision.
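One widely used positional encoding of the kind mentioned in the abstract (an illustration of the general idea; the talk may cover different constructions) is the Laplacian eigenvector encoding: each node is augmented with its entries in the first few non-trivial eigenvectors of the graph Laplacian. A minimal sketch on a toy ring graph:

```python
import numpy as np

# Toy molecular-style graph: a 6-node ring (illustrative only).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order

k = 2
pe = eigvecs[:, 1:k + 1]              # skip the constant (zero-eigenvalue) eigenvector
# Each row of `pe` is a k-dimensional positional encoding that can be
# concatenated to node features without changing the GNN architecture.
print(pe.shape)  # (6, 2)
```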
Speaker 3, Friday 15:30 - 16:30
Naoki Saito, UC Davis
Title: Generalized Haar-Walsh Dictionaries: Extensions and Applications
Abstract: I will discuss my group’s effort to generalize and extend the classical Haar-Walsh wavelet packet transforms to graphs, matrix data, and images. Using recursive partitioning of the graph and successive averaging and differencing operations, we proposed the Generalized Haar-Walsh Transform (GHWT) in 2014, which generates an overcomplete dictionary of orthonormal bases of piecewise-constant nature. Then, in 2021, we lifted the GHWT to the extended GHWT (eGHWT) for higher adaptability, using the method originally developed by Thiele and Villemoes for 1D regular lattices (1996) and that of Lindberg and Villemoes for 2D regular lattices (2000). The eGHWT significantly improves on the GHWT at a similar computational cost, O(n log n), where n is the number of vertices of the input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than 1.5^n possible bases, the eGHWT best-basis algorithm can find a better one by searching through more than 0.618*(1.84)^n possible bases; the difference between these two is huge, especially for large n. As applications, we will discuss simultaneous segmentation, denoising, and compression of graph signals and texture images, as well as term-document matrix analysis. If time permits, I will also describe our recent generalization of the scattering transform, a robust nonlinear signal feature extraction tool, using the GHWT/eGHWT.
Bio: Naoki Saito is an applied mathematician specializing in applied and computational harmonic analysis. He studied at the University of Tokyo, receiving his BEng and MEng in 1982 and 1984, respectively. He joined Nippon Schlumberger K.K. in 1984 and moved in 1986 to Schlumberger-Doll Research, Ridgefield, CT, where he was a research scientist. He continued his studies, receiving his PhD in applied mathematics from Yale University in 1994. He began teaching in the Department of Mathematics at the University of California, Davis in 1997, where he is currently a professor and director of the UC Davis TETRAPODS Institute of Data Science. His honors include the Best Paper Award from SPIE (1994), the Henri Doll Award from Schlumberger (1996), an ONR Young Investigator Award (2000), the Presidential Early Career Award for Scientists and Engineers (PECASE, 2000), and the Best Paper Award and Best Author Award from JSIAM (both 2016). He is a life senior member of IEEE. He served as Chair of the SIAM Activity Group on Imaging Science from 2013 to 2015 and is a member of the editorial boards of three international journals: Applied and Computational Harmonic Analysis; Inverse Problems and Imaging; and the Journal of Mathematical Imaging and Vision.
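The averaging-and-differencing construction at the heart of the GHWT can be illustrated on a single level of a recursive partition: the scaling coefficient of a region is its normalized sum, and the Haar-like coefficient is a normalized difference of the means of its two child regions. A minimal sketch (the partition and signal values below are made up for illustration, not from the talk):

```python
import numpy as np

def haar_step(x, child1, child2):
    """One averaging/differencing step of an unbalanced Haar transform
    on a region split into two child index sets."""
    n1, n2 = len(child1), len(child2)
    n = n1 + n2
    scaling = x[child1 + child2].sum() / np.sqrt(n)                # normalized region sum
    haar = np.sqrt(n1 * n2 / n) * (x[child1].mean() - x[child2].mean())
    return scaling, haar

# Hypothetical region of 5 vertices, split {0,1,2} / {3,4} by graph partitioning.
x = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
s, d = haar_step(x, [0, 1, 2], [3, 4])
print(s, d)  # scaling coefficient and Haar coefficient of this region
```

The two underlying basis vectors are orthonormal by construction; recursing this step over a hierarchical partition of the graph yields the piecewise-constant dictionary described in the abstract.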
Accepted Papers
Paper Title | Authors | Abstract |
---|---|---|
E(n) Equivariant Topological Neural Networks | Claudio Battiloro (Harvard University)*; Ege Karaismailoğlu (Harvard University); Mauricio Tec (Harvard University); George Dasoulas (Harvard University); Michelle Audirac (Harvard University); Francesca Dominici (Harvard University) | We introduce E(n)-Equivariant Topological Neural Networks (ETNNs), which are E(n)-equivariant message-passing networks operating on combinatorial complexes, formal objects unifying graphs, hypergraphs, simplicial, path, and cell complexes. ETNNs incorporate geometric node features while respecting rotation, reflection, and translation equivariance. ETNNs are natively ready for settings with heterogeneous interactions. We provide a theoretical analysis to show the improved expressiveness of ETNNs over architectures for geometric graphs. The broad applicability of ETNNs is demonstrated through two tasks of vastly different scales: i) molecular property prediction on the QM9 benchmark and ii) land-use regression for hyper-local estimation of air pollution with multi-resolution irregular geospatial data. The results indicate that ETNNs match or surpass SotA equivariant neural networks of the same class with a significantly smaller computational burden. |
Graph based novel features for detection of Alzheimer’s disease using EEG signals | Ramnivas Sharma (MNIT Jaipur)*; Hemant Kumar Meena (MNIT Jaipur) | Alzheimer’s disease (AD) is a progressive neurodegenerative disorder leading to dementia and neuronal death. In this study, graph-based features are integrated with traditional approaches to improve the accuracy of EEG-based AD detection. Conventional methods often fail to account for the complex functional connections between brain regions, as they typically analyze EEG channels in isolation. To overcome this limitation, a novel approach is proposed that utilizes graph signal representations of EEG signals, incorporating interrelationships within the brain. Features derived from the graph Fourier transform (GFT) and graph wavelet transform (GWT), along with statistical features such as mean, maximum, minimum, median, standard deviation, and kurtosis, are employed. After preprocessing EEG data from two datasets, machine learning classifiers are applied. RF and CNN yield the highest accuracies, 99.75% and 99.48%, respectively, indicating the efficacy of this approach for AD diagnosis. |
Tensor-Based Graph Neural Networks for Graph Topology Inference | Maggie Cheng (Illinois Institute of Technology)* | Graph neural networks have been the de facto tools for graph learning tasks. One basic function for graph learning is to learn the link topology of a graph. The ability to infer graph topology is imperative to many graph classification, regression, and even generation tasks. In this work, we propose a new set of orthogonal bases to construct linear layers for the graph neural networks. This set of orthogonal bases is defined for GNNs based on order-3 tensors and provides sufficient discriminative power to achieve $3$-WL expressiveness. |
Sparse polynomial approximation of diffusion on community-structured graphs | Giuseppe Alessio D'Inverno (International School for Advanced Studies, Trieste)*; Simone Brugiapaglia (Concordia University, Montréal); Kylian Ajavon (Concordia University, Montréal) | Diffusion kernels over graphs have been widely utilized as effective tools in various applications due to their ability to accurately model the flow of information through nodes and edges. However, there is a notable gap in the literature regarding the development of surrogate models for diffusion processes on graphs. In this work, we fill this gap by proposing sparse polynomial-based surrogate models for parametric diffusion equations on graphs with community structure. In tandem, we provide convergence guarantees for both least squares and compressed sensing-based approximations by showing the holomorphic regularity of parametric solutions to these diffusion equations. Our theoretical findings are accompanied by a series of numerical experiments conducted on both synthetic and real-world graphs that demonstrate the applicability of our methodology. |
Crop Classification in Satellite Images via Signed Graph Spectral Classifier Learning | Parham Eftekhar (York University)*; Gene Cheung (York University); Tim Eadie (Growers Edge) | Crop classification in satellite images is vital for remote crop monitoring. We propose a novel signed graph spectral algorithm with supervised feature learning for binary classification. During training, we extract pixel-wise feature vectors via a shallow CNN from spectral bands. Our contrastive learning objective minimizes feature distances for same-label pixels while maximizing them for different-label pairs. At inference, we compute feature vectors and map non-negative distances to signed edge weights via a shifted logistic function, forming a signed graph encoding pairwise (dis)similarities. We then compute the first eigenvector of a convex combination of the signed and combinatorial graph Laplacians in linear time, using its entry signs for classification. Experiments show our method achieves performance comparable to deep learning models with significantly fewer parameters. |
Tracking Network Dynamics using Probabilistic State-Space Models | Víctor Tenorio (King Juan Carlos University)*; Elvin Isufi (TU Delft); Geert Leus (TU Delft); Antonio G. Marques (King Juan Carlos University) | This paper introduces a probabilistic approach for tracking the dynamics of unweighted and directed graphs using state-space models (SSMs). Unlike conventional topology inference methods that assume static graphs and generate point-wise estimates, our method accounts for dynamic changes in the network structure over time. We model the network at each timestep as the state of the SSM, and use observations to update the beliefs that quantify the probability of the network being in a particular state. Then, the proposed method incorporates the information of real-time graph signals into the beliefs. These beliefs provide a probability distribution of the network at each timestep, providing both an estimate of the network and the uncertainty it entails. Our approach is evaluated through experiments with synthetic and real-world networks. The results demonstrate that our method effectively estimates network states, outperforming traditional techniques such as recursive least squares. |
Efficient Learning of Balanced Signed Graphs via Iterative Linear Programming | Haruki Yokota (The University of Osaka)*; Hiroshi Higashi (The University of Osaka); Yuichi Tanaka (The University of Osaka); Gene Cheung (York University) | Signed graphs incorporate both positive and negative edge weights to represent positive/negative correlations in data. When a signed graph is balanced, eigenvectors of its graph Laplacian matrix can be linearly transformed to those of a positive graph Laplacian. This allows the reuse of spectral filters designed for positive graphs to signed graphs. We propose an efficient algorithm to learn a balanced signed graph Laplacian directly from data, by extending a sparse inverse covariance estimation problem with linear constraints to enforce sign consistency of the edges. Experiments on real data demonstrate superior performance over existing methods. |
Higher-Order Semi-Supervised Learning on Point Clouds Using Hypergraphs | Adrien Weihs (University of California Los Angeles)* | We propose a higher-order hypergraph method for semi-supervised learning on point clouds. This is motivated by the fact that the classical hypergraph learning algorithm is asymptotically equivalent to p-Laplace Learning on graphs. Our new framework includes additional hypergraph geometric information by penalizing higher-order derivatives on hyperedges. We also preserve the quadratic form structure of Laplace Learning which greatly simplifies numerical implementations and we can reduce computational complexity through spectral truncation. In addition, this allows us to formulate the learning problem in the Bayesian setting. We present numerical results demonstrating the effectiveness of our methodology compared to other graph-based semi-supervised learning methods. |
Multimodal Graphs for Diffusion-Based Manifold Alignment | Jake Rhodes (Brigham Young University)*; Adam Rustad (Brigham Young University) | Data from various sources are often intrinsically linked. Multimodal data integration enriches information content over single-source data. Manifold alignment seeks a shared low-dimensional representation of multiple data sources, emphasizing similarities. Semi-supervised manifold alignment relies on partial correspondences, either through shared features or known associations. We introduce two graph-based alignment methods: SPUD (Shortest Paths on the Union of Domains) constructs a unified graph via known correspondences to learn inter-domain geodesic distances. MASH (Manifold Alignment via Stochastic Hopping) captures local geometry and iteratively refines inter-domain correspondences via a diffusion process. MASH forms a coupling matrix linking heterogeneous domains. We compare SPUD and MASH with existing methods, showing superior alignment and cross-domain classification, and demonstrate label transfer applications. |
Landmark-Based Node Embeddings and Graph Distance Approximations | My Le (Johns Hopkins University)*; Luana Ruiz (Johns Hopkins University); Souvik Dhara (Purdue University) | Learning node representations is a core problem in graph machine learning. Although existing embedding methods effectively capture local similarity measures, they often fall short when it comes to preserving global functions such as graph distances. Motivated by Bourgain's pioneering work on Hilbert space embeddings of metric spaces (1985), we examine the performance of local distance-preserving node embeddings. These embeddings, known as landmark-based algorithms, approximate pairwise distances by calculating shortest paths from a small subset of reference nodes, called landmarks. Our primary theoretical contribution demonstrates that random graphs, such as Erdős–Rényi random graphs, require lower dimensions for landmark-based embeddings compared to worst-case graphs. Additionally, we show empirically that the GNN-based approximations of distances to landmarks generalize effectively to larger networks, providing a scalable solution for graph representation learning. |
Leveraging Graph-Based Novel Features for Accurate Multi-Grade Brain Tumor Classification | Suman Dip (Malaviya National Institute of Technology, Jaipur)*; Hemant Kumar Meena (Malaviya National Institute of Technology, Jaipur) | Detecting multiclass brain tumors is difficult due to inter-class similarities in size, shape, and texture. Traditional methods fail to capture neighborhood relationships and structural differences. We use Graph Signal Processing (GSP) to model tumor structures via Gaussian, Absolute Correlation, Absolute Covariance, and Fundis-based edge techniques. The Absolute Correlation-based method outperforms others by preserving spatial interdependencies among pixel intensities. Features are balanced using SMOTE, reduced with LDA, and classified with boosting models using five-fold cross-validation. Our approach achieves 97.27% accuracy for three-class and 98.63% for fifteen-class classification on two public datasets, surpassing traditional methods. By integrating graph-based representations with machine learning, our method enhances tumor detection and classification, setting a new benchmark in medical imaging. |
Online Learning for State-dependent Causal Graph Process Model | Yuzhe Li (Tsinghua University); Hong Zhao (Tsinghua University)* | Graph Signal Processing (GSP) theory is a powerful framework for reconstructing graph structures from observed data. While significant progress has been made in the field of graph learning, the majority of existing studies focus on static graphs. However, dynamic graphs are more prevalent in real-world applications, necessitating the extension of GSP tools, theories, and algorithms to accommodate such scenarios. In this work, we extend the existing causal graph process model to dynamic weight scenarios. A state-dependent causal graph process model, along with its learning algorithm, is proposed for graph signal time series. Preliminary experimental results demonstrate the effectiveness of the proposed method. |
Learning Graphs from Structural Vector Autoregressions with Sparse Input | Panagiotis Misiakos (ETH Zurich)*; Markus Püschel (ETH Zurich) | We introduce SpinSVAR, a novel method for estimating a structural vector autoregression (SVAR) from time-series data under sparse input assumption. Unlike prior approaches using Gaussian noise, we model the input as independent Laplacian variables, enforcing sparsity and yielding a maximum likelihood estimator (MLE) based on least absolute error regression. We provide theoretical consistency guarantees for the MLE under mild assumptions. SpinSVAR is efficient: it can leverage GPU acceleration to scale to thousands of nodes. On synthetic data with Laplacian or Bernoulli-uniform inputs, SpinSVAR outperforms state-of-the-art methods in accuracy and runtime. When applied to S&P 500 data, it clusters stocks by sectors and identifies significant structural shocks linked to major price movements, demonstrating the viability of our sparse input assumption. |
Orthogonal L1 Digraph Fourier Basis via Total Variation | Vedran Mihal (ETH Zurich)*; Markus Püschel (ETH Zurich) | We propose a novel orthogonal Fourier basis for arbitrary directed graphs based on iterative maximization of the L1-norm total variation. We leverage algorithms from L1-norm principal component analysis for its efficient computation, which also implies that the basis always exists. We prove that the basis covers the range of total variations better than prior bases and demonstrate its effectiveness in a prototypical signal recovery experiment. |
Enabling Out-of-Sample Extension in Semi-Supervised Manifold Alignment through Twin Autoencoders | Jake Rhodes (Brigham Young University)*; Adam Rustad (Brigham Young University); Marshall Nielsen (Brigham Young University) | Manifold alignment seeks a shared representation that captures inter-domain relationships while preserving intra-domain structures. A key limitation of traditional approaches is the lack of an out-of-sample extension mechanism, requiring full recomputation when new data is introduced. To address this, we propose an out-of-sample extension framework using a twin autoencoder (AE) architecture, where each AE is trained on a single modality and regularized with a pre-aligned joint embedding. This structure enables seamless extension of new data points while maintaining geometric consistency. We validate our method on bimodal datasets, demonstrating robust alignment preservation. |
Identifying Adversarial Attacks in Crowdsourcing via Dense Subgraph Detection | Abdullah Karaaslanli (Michigan State University)*; Panagiotis Traganitis (Michigan State University); Aritra Konar (KU Leuven) | Crowdsourcing is becoming increasingly important for contemporary applications in machine learning and artificial intelligence. However, crowdsourcing systems may be susceptible to adversarial attacks where a subset of annotators deliberately provide erroneous responses. This paper introduces a novel algorithm for identifying adversarial attacks in crowdsourcing systems by recasting the problem as dense subgraph detection in bipartite graphs. In particular, we represent crowdsourced data as a weighted bipartite graph between workers and data points where edge weights are computed from annotators' responses. The bipartite graph is then analyzed with a peeling algorithm to detect the dense subgraph that includes adversarial attacks. Compared to previous methods where only adversaries are detected, our proposed method can simultaneously identify adversarial annotators as well as affected data points. Preliminary results on real datasets showcase the potential of this novel approach. |
Differentially Private Decentralized PCA of Graph Signals for Graph Clustering | Andrew Campbell (Cornell University)*; Anna Scaglione (Cornell University); Sean Peisert (Lawrence Berkeley National Laboratory) | We address the problem of decentralized principal component analysis (DPCA) of graph signals and its application to graph clustering of graphical data under privacy constraints. Leveraging graph data smoothness, node clustering reduces to computing principal eigenvectors of the sample covariance, followed by $k$-means clustering. We develop a distributed mechanism for this computation with differential privacy (DP) guarantees. Unlike existing approaches, we consider a setting where each agent can only access sub-graph signals. To handle this complexity, we propose a decentralized and DP power method for computing the eigenvectors of the entire graph signal sample correlation. Our method leverages consensus aggregation, while requiring each agent to track only its portion of the global eigenvectors. This method enhances the privacy guarantees while reducing communication overhead. Finally, we establish $(\epsilon, \delta)$-DP bounds and demonstrate its efficacy through a simulation. |
Graph Signal Matching for Image Registration | Hang Liu (Cornell University)*; Anna Scaglione (Cornell University); Urbashi Mitra (University of Southern California) | We propose a graph signal processing (GSP) approach for simultaneous pose and correspondence estimation in image registration. By modeling point sets as graph signals on k-nearest-neighbor (kNN) graphs, we recast the correspondence estimation task into a graph signal matching problem. We employ a spectral matching step to recover the unknown permutation, followed by a least-squares solution for the rigid transformation. Numerical experiments on 3D scanning data demonstrate that our method achieves superior registration accuracy compared to baseline algorithms, thus validating its effectiveness in both 3D-2D image registration scenarios. |
DAG structure learning based on non-negative weights | Samuel Rey (King Juan Carlos University)*; Seyed Saman Saboksayr (University of Rochester); Gonzalo Mateos (University of Rochester) | We address the problem of learning the topology of directed acyclic graphs (DAGs) from nodal observations in a linear structural equation model. Recent methods frame this task as a continuous optimization problem but struggle with non-convexity. To overcome this, we assume non-negative edge weights, allowing cycles to be characterized and prevented using a convex acyclicity function based on the log-determinant of the adjacency matrix. This reformulates the problem as a convex optimization task. We propose a DAG recovery algorithm using the method of multipliers, guaranteeing a global minimizer. Moreover, we prove that with infinite samples, our approach recovers the true DAG structure. Empirical validation on synthetic data shows our algorithm outperforms state-of-the-art alternatives. |
Unrolled Denoising of Attributes on Graphs with Local-and-Global Smoothness Assumption | Reina Kaneko (The University of Osaka)*; Junya Hara (The University of Osaka); Hiroshi Higashi (The University of Osaka); Yuichi Tanaka (The University of Osaka) | This paper presents a denoising method for attributes on graphs by using a local-and-global smoothness assumption. Restoration of graph signals is one of the most fundamental problems in graph signal processing. Many works assume that graphs are noiseless; however, both the graph and the signal values may be noisy. In this paper, we address denoising of attributes by assuming a local-and-global smoothness nature of graph signals: they are locally smooth on the graph and globally smooth in an appropriate space regardless of the graph. We formulate a corresponding convex problem that can be iteratively solved with a monotone operator splitting algorithm. Moreover, the iterations can be unrolled to apply deep algorithm unrolling (DAU) for tuning internal parameters automatically from the training data. In experiments, our method exhibits higher restoration performance in PSNR than existing model-based and DAU-based methods even when coordinates are corrupted by noise. |
On multi-scale Graph Representation Learning | Christian Koke (Technical University Munich)*; Daniel Cremers (Technical University Munich) | While Graph Neural Networks (GNNs) are widely used in modern computational biology, an underexplored drawback of common GNN methods is that they are not inherently multiscale consistent: two graphs describing the same object or situation at different resolution scales are assigned vastly different latent representations. This prevents graph networks from generating data representations that are consistent across scales. It also complicates the integration of representations at the molecular scale with those generated at the biological scale. Here we discuss why existing GNNs struggle with multiscale consistency and show how to overcome this problem through carefully restricting learnable graph filters and selecting the appropriate graph shift operator. |
Period Estimation for Time-Varying Graph Signals Based on Cyclic Graph Wide Sense Stationarity | Tsutahiro Fukuhara (The University of Osaka)*; Junya Hara (The University of Osaka); Hiroshi Higashi (The University of Osaka); Yuichi Tanaka (The University of Osaka) | This paper proposes a period estimation method for time-varying graph signals (TVGSs). In many network-based measurement systems, TVGSs are stationary in the spatial domain and also exhibit periodic stationarity in the time domain. Although the accurate estimation of the period of TVGSs would result in accuracy gains in applications, most existing works are limited in addressing this issue. First, we introduce \textit{cyclic graph wide sense stationarity} on TVGSs where signals are stationary in the spatial domain and periodically stationary in the time domain. Second, we propose a period estimation method. It consists of two phases: 1) we extract periodic features from TVGSs by decomposing a set of TVGSs using graph Fourier transform and a nested periodic dictionary, and 2) we estimate a period of TVGSs based on the extracted features. Numerical experiments on real-world data demonstrate that our method effectively estimates the period. |
Online Learning Of Expanding Graphs | Samuel Rey (King Juan Carlos University)*; Bishwadeep Das (TU Delft); Elvin Isufi (TU Delft) | This paper addresses online network topology inference for expanding graphs using spatiotemporal signal streams. Unlike prior works that assume a fixed set of nodes, we consider dynamically growing graphs, which introduce challenges in modeling temporal dynamics and increase computational complexity. We propose an online algorithm based on projected proximal gradient descent that adapts to graph expansion by recursively updating the sample covariance matrix. Our method differentiates updates for new and existing nodes. We specialize our approach to Gaussian Markov random fields, analyzing computational complexity and dynamic cumulative regret. Finally, we validate our method through controlled experiments and real-world datasets from epidemic and financial networks. |
Online Network Inference from Graph-Stationary Signals with Hidden Nodes | Andrei Buciulea Vlas (Universidad Rey Juan Carlos)*; Madeline Navarro (Rice University); Samuel Rey Escudero (Universidad Rey Juan Carlos); Santiago Segarra (Rice University); Antonio García Marqués (Universidad Rey Juan Carlos) | Graph learning is the fundamental task of estimating unknown graph connectivity from available data. Traditional methods assume full, simultaneous data availability, but real-world scenarios often involve incomplete, sequential data. We propose an online graph estimation method that accounts for hidden nodes. By leveraging signals stationary on the underlying graph, we model unknown connections to hidden nodes. We formulate a convex optimization problem for graph learning from streaming, incomplete signals and solve it via an efficient proximal gradient algorithm that operates in real-time. We establish theoretical conditions ensuring that our online method aligns with batch solutions. Experiments on synthetic and real data validate the effectiveness of our approach for online graph learning with missing observations. |
Graph Signal Processing via the Holomorphic Functional Calculus | Christian Koke (Technical University Munich)* | On undirected graphs, a well-defined unitary graph Fourier transform makes it possible to construct filters associated with arbitrary real-valued functions. On directed graphs, such a unitary transform is unavailable and filters arise as polynomials in the graph shift operator. In this work, we unify these distinct approaches using tools from complex analysis. From a theoretical standpoint, this allows us to significantly extend the class of available filter functions on directed graphs. From the viewpoint of algebraic signal processing, this extended algebra of filters remains closed even in the inductive graph learning setting, where graphs are allowed to vary. Furthermore, we are able to provide a frequency-response interpretation of filters on directed graphs akin to the one in the undirected setting. From a practical standpoint, we establish that convolutional networks conforming to the newly developed theory achieve new state-of-the-art results on standard graph learning benchmarks. |
A Random Dot Product Graph Model for Weighted and Directed Networks | Bernardo Marenco (Udelar)*; Paola Bermolen (Udelar); Marcelo Fiori (Udelar); Federico La Rocca (Udelar); Gonzalo Mateos (University of Rochester) | The Random Dot Product Graph (RDPG) model assigns a low-dimensional vector to each vertex, and postulates that an edge between any two nodes exists with probability given by the inner product of said vectors. Recently, this latent position model has been extended to account for weighted graphs (the so-called Weighted (W-)RDPG), now embedding each node with a sequence of vectors. However, graphs adhering to this nonparametric W-RDPG model are constrained to be undirected and homophilic. In this work, we extend the model's expressivity by proposing a variant for directed graphs, which may also include heterophilic nodes. To this end, we endow each vertex with two sequences, respectively modeling the node's incoming and outgoing connectivity behavior. The effectiveness of the novel weighted and directed RDPG model is illustrated via several test cases, including both synthetic and real-life networks. |
Accelerated graph clustering with PASCO: Parallel structured coarsening and partition alignment by optimal transport | Etienne Lasalle (Inria, ENS de Lyon, CNRS, UCB Lyon 1, LIP, UMR 5668); Rémi Vaudaine (Inria, ENS de Lyon, CNRS, UCB Lyon 1, LIP, UMR 5668); Titouan Vayer (Inria, ENS de Lyon, CNRS, UCB Lyon 1, LIP, UMR 5668); Pierre Borgnat (CNRS, ENS Lyon)*; Paulo Gonçalves (Inria, ENS de Lyon, CNRS, UCB Lyon 1, LIP, UMR 5668); Rémi Gribonval (Inria, ENS de Lyon, CNRS, UCB Lyon 1, LIP, UMR 5668); Marton Karsai (Department of Network and Data Science, Central European University, Vienna) | We develop a method, called PASCO, to accelerate clustering algorithms. It has three steps: 1) we compute several independent small graphs representing the input graph by applying an efficient and structure-preserving coarsening algorithm; 2) a clustering algorithm is run in parallel on each small graph, providing several partitions of the initial graph; 3) these partitions are aligned and combined with an optimal transport method to output the final partition. This framework relies on two key contributions: a novel global algorithm structure designed to enable parallelization and a fast, empirically validated graph coarsening algorithm that preserves structural properties. We demonstrate the strong performance of PASCO in terms of computational efficiency, structural preservation, and output partition quality, evaluated on both synthetic and real-world graph datasets. |
Estimating fair graphs with unbiased connections | Madeline Navarro (Rice University)*; Samuel Rey (King Juan Carlos University); Andrei Buciulea (King Juan Carlos University); Antonio Marques (King Juan Carlos University); Santiago Segarra (Rice University) | We propose estimating graphs that are fair with respect to sensitive nodal attributes. Many real-world models exhibit unfair discriminatory behavior due to biases in data. Such discrimination is known to be exacerbated when data is equipped with pairwise relationships encoded in a graph. We thus propose an optimization-based approach to accurately infer graphs while discouraging biased edges. To this end, we present bias metrics that measure topological demographic parity to be applied as convex penalties. We also propose an efficient proximal gradient algorithm to obtain the estimates. Theoretically, we express the tradeoff between fair and accurate estimated graphs. Critically, this includes demonstrating when accuracy can be preserved in the presence of a fairness regularizer. Our empirical validation includes synthetic and real-world simulations that illustrate the value and effectiveness of our proposed optimization problem and iterative algorithm. |
Bilevel Optimization for Scalable and Robust Graph Neural Network Training | Chenyue Zhang (The Chinese University of Hong Kong)*; Anna Scaglione (Cornell University); Tianyi Chen (Rensselaer Polytechnic Institute); Hoi-To Wai (The Chinese University of Hong Kong) | In this paper, we propose a bilevel optimization framework to robustly train graph neural networks (GNNs) from multiple subgraph or multi-graph datasets. Our approach jointly learns the GNN parameters and subgraph selection weights, identifying and discarding noisy or distributionally misaligned graph datasets that could hinder model performance. By reformulating the training objective under a bilevel setup, we leverage a penalty-based algorithm that converges to a stationary point with manageable computational complexity. Experiments confirm that our method effectively boosts prediction accuracy for various tasks, highlighting its scalability and robustness in real-world graph machine learning scenarios. |
Real-Time Car Classification using mmWave Radar with Graph-Based Signal Processing on FPGA | Anand Mohan (Malaviya National Institute of Technology)*; Hemant Kumar Meena (Malaviya National Institute of Technology) | In ADAS, identifying and classifying vehicles is vital for features like traffic sign recognition, adaptive cruise control, and collision avoidance. Current systems using LiDAR and cameras struggle in bad weather. We propose a system using millimeter-wave (mmWave) radar that works in all weather conditions, implemented on an FPGA with supporting software. The radar creates 3D point clouds of vehicles, simplified into 2D images for feature extraction. Radar point clouds are irregular, making analysis challenging. The system uses Graph Signal Processing (GSP), specifically the Graph Wavelet Transform (GWT), to handle this irregularity. After feature extraction, vehicles are classified using machine learning, with the K-Nearest Neighbor (k-NN) algorithm achieving 99.55% accuracy. This FPGA-based system, combining mmWave radar and GWT, provides a robust, real-time solution for vehicle classification, especially in challenging weather where other technologies fail. |
Data-Driven Graph Filters via Adaptive Spectral Shaping | Dylan Sandfelder (University of Oxford)*; Mihai Cucuringu (University of California, Los Angeles); Xiaowen Dong (University of Oxford) | We present a data-driven framework for learning multi-scale filters on graphs via adaptive spectral shaping, addressing the pervasive challenge of heterogeneous frequency content in graph signals. Our model learns a data-driven parametric kernel that can adapt to different types of spectral regions. By combining this mechanism with a multi-scale extension, we capture signals that exhibit multiple peaks across the graph spectrum. Our implementation leverages polynomial approximations of the Laplacian to avoid costly eigendecompositions, making the approach scalable to larger graphs. Experimental results on synthetic datasets show that our adaptive, multi-scale filters outperform standard single-scale filters in reconstructing graph signals with complex spectra. These findings highlight the potential of incorporating data-driven designs into multi-scale graph signal processing to enable more accurate analysis of networked systems such as sensor arrays, communication networks, and beyond. |
Graph Wavelet Neural Networks for EEG-Based Seizure Detection | Wei Cao (Bishop's University)*; Rachid Hedjam (Bishop's University) | Electroencephalography (EEG) is a widely used, non-invasive tool for seizure detection, but manual annotation is time-consuming and requires medical expertise. This study explores the use of Adaptive Graph Wavelet Neural Networks (AGWNNs) for EEG-based seizure detection, leveraging their ability to efficiently capture localized and multi-scale electrical signal patterns in the brain. With a proper selection of wavelet kernel functions, we devise an attention mechanism to flexibly adjust the weights of different scales of graph wavelets. Unlike traditional deep learning models that fail to incorporate spatial dependencies between EEG electrodes, AGWNNs provide an effective way to model EEG signals as graphs, preserving multi-context relationships. Experiments on the Temple University Hospital Seizure Detection (TUSZ) dataset illustrate that AGWNNs outperform other state-of-the-art models. |
Forest Proximity Graphs for Time Series | Ben Shaw (Utah State University)*; Jake Rhodes (Brigham Young University); Soukaina Boubrahimi (Utah State University); Kevin Moon (Utah State University) | RF-GAP has recently been introduced as an improved random forest proximity measure. In this paper, we present PF-GAP, an extension of RF-GAP proximity graphs to proximity forests, an accurate and efficient time series classification model. We use the forest proximity graphs alongside Local Outlier Factors to investigate the connection between misclassified points and outliers, comparing with nearest neighbor classifiers which use time series distance measures. We show that the forest proximities seem to exhibit a stronger connection between misclassified points and outliers than nearest neighbor classifiers. |
HyperNATE: Scaling Tensor-Based Hypergraph Neural Networks Through Attention | Nicolas Bello (University of Delaware)*; Fuli Wang (University of Delaware); Daniel Lau (University of Kentucky); Gonzalo Arce (University of Delaware) | Hypergraphs are vital for modeling high-order relationships in complex systems across various domains. This has spurred significant research interest in hypergraph neural networks. Existing hypergraph neural networks face two key challenges: (1) tensor-based representations preserve rich structural information but incur heavy computational costs during training due to message passing operations, and (2) prevalent homophily assumptions fail to model the growing number of heterophilic datasets. To address these gaps, we introduce the Hypergraph Neighborhood Aggregation Transformer Encoder (HyperNATE), which decouples tensor-based message aggregation from training and leverages an attention mechanism to enable parallel multi-hop aggregation. Additionally, a high-pass filter is incorporated to effectively capture heterophilic features. Empirical evaluations on node classification tasks show that HyperNATE is 10-100x faster in training than tensor-based approaches. |
Peer-to-Peer Learning Dynamics of Wide Neural Networks | Shreyas Chaudhari (Carnegie Mellon University); Srinivasa Pranav (Carnegie Mellon University)*; Emile Anand (Georgia Institute of Technology); José M. F. Moura (Carnegie Mellon University) | Peer-to-peer learning is a framework that enables beyond-5G distributed edge devices to collaboratively train deep neural networks in a privacy-preserving manner without the aid of a central server. The neural network training algorithms have many design considerations that are difficult to tune in deployment settings, such as neural network architectures and hyperparameters. This presents a need for characterizing training dynamics of distributed optimization algorithms used to train nonconvex neural networks in peer-to-peer learning environments. In this work, we provide an explicit characterization of learning dynamics of wide neural networks trained using popular distributed gradient descent (DGD) algorithms. Our results leverage recent advancements in neural tangent kernel (NTK) theory and extensive previous work on distributed learning and consensus. Our methods accurately predict parameter and error dynamics of wide neural networks trained for classification tasks. |
Accelerating Research with Graph Signal Processing and AI-Enhanced Knowledge Graphs | Thomas Kerby (Utah State University); Benjamin Fuller (Utah State University); Kevin Moon (Utah State University)* | In an era of exponential research growth and rapid technological advances, traditional literature review and synthesis methods face significant challenges. We present an open source software package that leverages graph signal processing (GSP) and advanced natural language processing (NLP) techniques. Our tool builds a dynamic knowledge/citation graph from a topic, a set of academic papers, or a BibTeX file using the Semantic Scholar API with the Neo4j graph database. By integrating Neo4j vector databases with LangGraph, it seamlessly connects to large language models (LLMs) to create interactive research agents. This framework streamlines navigation of complex research landscapes and introduces a novel method to compare traditional citation graphs with semantic graphs derived from embedding distances, offering fresh insights into paper influence and revealing latent connections overlooked by conventional bibliometrics. |
Gradient Networks | Shreyas Chaudhari ( Carnegie Mellon University); Srinivasa Pranav (Carnegie Mellon University)*; José M. F. Moura (Carnegie Mellon University) | Directly parameterizing and learning gradients of functions has applications in inverse problems, generative modeling, and optimal transport. We introduce gradient networks (GradNets): neural network architectures that parameterize gradients of various function classes. We provide a GradNet design framework that includes methods for transforming them into monotone gradient networks (mGradNets), which represent gradients of convex functions. Our results establish that GradNets (and mGradNets) universally approximate gradients of (convex) functions. Furthermore, GradNets can be customized to correspond to specific spaces of potential functions, including transformed sums of (convex) ridge functions. We propose two distinct GradNet architectures and describe the corresponding monotone versions. Our empirical results demonstrate that these architectures outperform existing methods by up to 15 dB in gradient field tasks and by up to 11 dB in Hamiltonian dynamics learning tasks. |
FedBaF: Federated Learning Aggregation Biased by a Foundation Model | Jong-Ik Park (Carnegie Mellon University); Srinivasa Pranav (Carnegie Mellon University)*; José M. F. Moura (Carnegie Mellon University); Carlee Joe-Wong (Carnegie Mellon University) | Foundation models are now a major focus because they generalize across diverse tasks. Existing approaches for adapting foundation models to new applications often rely on Federated Learning (FL) and disclose the foundation model weights to clients when using it to initialize the global model. These methods ensure client data privacy, but compromise model and information security. We introduce Federated Learning Aggregation Biased by a Foundation Model (FedBaF), a method for dynamically integrating pre-trained foundation model weights during the FL aggregation phase. FedBaF keeps the foundation model confidential while still leveraging it to train more accurate models, especially in non-IID and adversarial scenarios. Our comprehensive experiments use Pre-ResNet and foundation models like Vision Transformer to demonstrate that FedBaF matches and often surpasses traditional weight initialization methods by up to 11.4% in IID and up to 15.8% in non-IID settings. |
Leveraging Line Graph Representations to Improve GNN Convergence Rates | Roxanne Holden (JHU)*; Luana Ruiz (JHU) | A graphon is a representation of "families" of graphs and describes the limiting object of a sequence of large, finite graphs; graphons are applicable to many areas, including Graph Neural Networks (GNNs). Specifically, graphons of convergent graph sequences can be used to analyze the convergence and transferability of GNNs. Currently, there is a tradeoff between restrictive continuity assumptions on the graphon and increased computational cost when studying the convergence of the spectra of the graph to the spectra of the graphon. In this work, we propose a study of the rates of convergence of a graph and its associated line graph to improve convergence bounds applicable to GNNs. |
Consistent Identification of Top-K Nodes in Noisy Networks | Hui Shen (McGill University)*; Eric Kolaczyk (McGill University) | Identifying important nodes in a network, often measured by centrality, is a key challenge in network analysis. However, real-world networks are often noisy or incomplete, distorting rankings and misidentifying key nodes. In this paper, we examine how network noise impacts the accurate recovery of top-k nodes. Specifically, we consider a model where the observed network is a noisy version of a true network, with edges randomly added or removed according to a probabilistic noise model. Under this framework, we establish conditions for consistent recovery of top-k nodes and assess their feasibility across common network models. We also explore the challenges of detecting key nodes under high noise and discuss broader implications. For cases between consistency and infeasibility, we propose a confidence set approach to ensure key nodes are included with high probability. We also provide an initial analysis of the robustness of eigenvector centrality to spectral perturbations caused by noise. |
Robustness of Graph Topology Learning with Smooth Signals under Partial Observations | Hoang-Son Nguyen (Oregon State University)*; Hoi-To Wai (The Chinese University of Hong Kong) | Recently, many sophisticated algorithms have been proposed for graph topology learning from partial observations. Most of them rely on advanced structural priors such as low-rankness and sparsity, yet still require the number of unobserved nodes to be significantly smaller than the graph size. The aim of this ongoing work is to demonstrate theoretically that simple graph topology learning methods are implicitly robust to partial observations of low-pass filtered graph signals. We achieve this result by extending the restricted isometry property (RIP) to the Dirichlet energy function. We show that a smoothness-based graph learning formulation on partial observations is able to learn the ground truth graph topology corresponding to the observed nodes. |
Interpretable Lightweight Transformer via Unrolling of Learned Graph Smoothness Priors | Tam Thuc Do (York University)*; Parham Eftekhar (York University); Seyed Alireza Hosseini (York University); Gene Cheung (York University); Philip A. Chou (packet.media) | We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors, e.g., the $\ell_1$-norm graph total variation (GTV), subject to an interpolation constraint. The insight is that a normalized signal-dependent graph learning module amounts to a variant of the basic self-attention mechanism in conventional transformers. Unlike "black-box" transformers that require learning huge parameter sets to compute the affinities and subsequent output embeddings, our unrolled networks employ shallow CNNs to learn low-dimensional features per node to construct similarity graphs. At each layer, the target interpolated signal is a low-pass graph filtered output derived from the minimization of an assumed graph smoothness prior, leading to a reduction in parameter count. Experiments for two image interpolation applications verify the restoration performance and parameter efficiency of our unrolled networks. |
Joint Time-varying Graph Estimation / Signal Interpolation via Smoothness and Low-Rank Priors | Saghar Bagheri (York University)*; Gene Cheung (York University); Tim Eadie (Growers Edge); Antonio Ortega (University of Southern California) | In graph signal processing scenarios where node similarities evolve over time, the underlying graph structure must adapt. We model small changes between consecutive adjacency matrices as a low-rank matrix. Specifically, given an initial adjacency W1 at t=1, we jointly interpolate a signal x2 and estimate W2 at t=2 by leveraging a graph signal smoothness prior and a low-rank prior on W2-W1. Our approach alternates between computing x2 with fixed W2 using conjugate gradient and updating W2 with fixed x2 via a variant of proximal gradient descent. We show that the proximal mapping for rank(W2-W1) can be efficiently approximated in linear time using a fast greedy algorithm, where a few rank-1 update matrices capture key node-to-node interactions. By unrolling algorithm iterations into layers, we construct a lightweight neural network for data-driven parameter tuning. Experiments demonstrate that our joint optimization outperforms existing graph learning methods in signal interpolation. |
InfoGain: Furthering the Design of Diffusion Wavelets for Graph-Structured Data | David Johnson (Boise State University)*; Michael Perlmutter (Boise State University); Smita Krishnaswamy (Yale University) | Diffusion wavelets extract information from graph signals at different scales of resolution by utilizing graph diffusion operators raised to various powers, known as diffusion scales. Traditionally, the diffusion scales are chosen to be dyadic integers, $2^j$. Here, we propose a novel, unsupervised method for selecting the diffusion scales based on ideas from information theory. We then show that our method can be incorporated into wavelet-based GNNs via graph classification experiments. |
Causal Analysis of Graph Signals for Brain Effectome Inference | Srikar Mutnuri (University of Virginia)*; Aniruddha Adiga (University of Virginia); Srinivasan Venkatramanan (University of Virginia); Madhav V. Marathe (University of Virginia) | Understanding the directed interactions between brain regions is critical for analyzing neurodegenerative diseases like Alzheimer's disease (AD). Traditional functional connectivity (FC) methods capture statistical associations but fail to infer causal relationships, limiting their ability to reveal structural disruptions in disease progression. In this exploratory work, we propose a causal graph inference and spectral analysis framework for EEG signals. Specifically, we leverage Granger causality and spectral graph methods to construct and analyze the effective connectome (effectome) of the brain. Our work reveals that AD networks exhibit lower algebraic connectivity (λ2), reduced modularity (eigenvalue gaps), and increased structural sensitivity to perturbations, compared to healthy individuals. Additionally, we simulate diffusion processes on the inferred graph topology to model signal propagation, demonstrating disrupted information flow in AD-affected networks. |
Learning Graph Geometry and Topology Using Dynamical Systems Based Message-Passing | Dhananjay Bhaskar (Yale University); Xingzhi Sun (Yale University)*; Yanlei Zhang (Mila - Quebec AI Institute); Charles Xu (Massachusetts Institute of Technology); Oluwadamilola Fasina (Yale University); Guy Wolf (Université de Montréal); Maximilian Nickel (Meta AI Research (FAIR)); Michael Perlmutter (Boise State University); Smita Krishnaswamy (Yale University) | We introduce DYMAG, a novel, efficient message-passing paradigm for Graph Neural Networks (GNNs) that leverages continuous multiscale dynamics on graphs. Unlike traditional discrete methods that rely on simple diffusion and aggregation, DYMAG employs complex dynamics—drawing on heat, wave, and chaotic equations—to capture deeper topological properties. By approximating continuous dynamics with Chebyshev polynomials of a graph Laplacian, it computes multiscale representations embodying key topological and spectral features without explicit topological computation. We demonstrate DYMAG’s superior performance in recovering random graph parameters, predicting persistent homology on synthetic and citation graphs, and outperforming existing methods in tasks such as protein shape prediction, total polar surface area, aromaticity, and material band gap estimation, underscoring its value in molecular research. |
HiPoNet: A Topology-Preserving Multi-View Neural Network For High Dimensional Point Cloud and Single-Cell Data | Siddharth Viswanath (Yale University)*; Hiren Madhu (Yale University); Dhananjay Bhaskar (Yale University); Jake Kovalic (Yale University); David Johnson (Boise State University); Rex Ying (Yale University); Christopher Tape (University College London); Ian Adelstein (Yale University); Michael Perlmutter (Boise State University); Smita Krishnaswamy (Yale University) | We propose HiPoNet, an end-to-end differentiable neural network for regression, classification, and representation learning on high-dimensional point clouds. Existing point-cloud methods are tailored for 3D data and hence struggle with the scale and dimensionality of modern single-cell and spatial data. Most current approaches build a single nearest-neighbor graph, discarding important geometric information. In contrast, HiPoNet forms higher-order simplicial complexes through learnable feature reweighting, generating multiple data views that disentangle distinct biological processes. It then employs simplicial wavelet transforms to extract multi-scale features, capturing both local and global topology. We show that HiPoNet preserves topological information in the learned representations, and significantly outperforms state-of-the-art models on single-cell and spatial transcriptomics datasets. Overall, HiPoNet offers a robust and scalable solution for high-dimensional data analysis. |
STAGED: A Multi-Agent Neural Network for Learning Cellular Interaction Dynamics | João Felipe Rocha (Yale University)*; Xingzhi Sun (Yale University); Ke Xu (Yale University); Jake Kovalic (Yale University); Ananya Krishna (Yale University); Smita Krishnaswamy (Yale University); Mark Gerstein (Yale University); Andrei Chupakhin (Lomonosov Moscow State University) | The advent of single-cell technology has improved our understanding of cellular states and subpopulations in various tissues under normal and diseased conditions by using data-driven approaches like clustering and trajectory inference. However, these methods treat cells as independent data points. Spatial transcriptomics allows us to model cells as a dynamic system with interactions influencing cell states and communication. While agent-based modeling (ABM) provides a framework, traditional approaches rely on handcrafted rules rather than data. To address this, we introduce Spatial Temporal Agent-Based Graph Evolution Dynamics (STAGED), integrating ABM with deep learning. Using graph neural networks (GNNs) with shared weights per cell type, our model represents genes as vertices and interactions as edges, learning strengths via attention. Trained on time-lapse spatial data as a graph neural ODE (GDE), it captures intercellular and intracellular interactions adaptively. |
Graph Signal Processing: Frequency Analysis for Similar Matrices | John Shi (Carnegie Mellon University)*; Jose Moura (Carnegie Mellon University) | Spectral analysis is a fundamental part of both DSP and GSP. Many signal processing applications rely on performing a spectral analysis by taking the Fourier transform. In both DSP and GSP, the ordering of the frequencies (from low to high) is determined by the total variation of the eigenvectors. Additionally, eigenvectors associated with low frequencies have similar values between adjacent or neighboring nodes, while eigenvectors associated with high frequencies have larger oscillations between neighboring nodes. In this paper, we examine frequency analysis of "similar" graphs, i.e., graphs whose adjacency matrices are related by a similarity transformation. Similar matrices have the same eigenvalues, but different eigenvectors and entries. We show that frequency signals and the frequency ordering are preserved across similar graphs. We examine examples of these similar graphs through the GSP companion model [1], which shows that GSP is DSP with appropriate boundary conditions. |
Inferring the Graph Structure of Images for Graph Neural Networks | Mayur Gowda (Carnegie Mellon University); John Shi (Carnegie Mellon University)*; Augusto Santos (Instituto de Telecomunicacoes-IT); Jose Moura (Carnegie Mellon University) | Image datasets such as MNIST are a key benchmark for testing GNN architectures. The images are traditionally represented as a grid graph with each node representing a pixel and edges connecting neighboring pixels (vertically and horizontally). The graph signal is the values (intensities) of each pixel in the image. The graphs are used as input to a GNN to classify the images. In this work, we improve the accuracy of downstream graph neural network tasks by finding alternative graphs to the grid graph to represent the dataset images, following the approach in [4, 5]. We compute row and column correlation graphs for each image in MNIST and Fashion-MNIST from correlations between pixel values, using the method in [4, 5]. We form the graph representing the image using the row and column graphs. We show that using these different graph representations as input to Graph CNNs and GAT improves the accuracy over the traditional grid graph and other graph methods in the literature. |
Spatial Contrastive Pre-Training of Transformer Encoders for sEEG-Based Seizure Onset Zone Detection | Paulo Goncalves (Inria); Pierre Borgnat (ENS de Lyon)*; Zacharie Rodière (ENS de Lyon) | For the clinical study of epilepsy, we develop a transformer encoder for detecting the Seizure Onset Zone (SOZ) from stereo-EEG (sEEG). It integrates clinically grounded time-frequency features with spatial contrastive pre-training. While prior spatial transformer approaches analyze learned representations, our method uniquely combines: (1) engineered time-frequency representations (TFRs) encoding epileptic spikes and oscillations, and (2) a contrastive objective leveraging anatomical relationships between the electrode contacts inside the SOZ and the ones outside it. The model processes heterogeneous sEEG records from different patients, using both ictal and interictal data. This contrastive strategy minimizes representational similarity between contact pairs on either side of the SOZ boundary while maximizing intra-SOZ similarity. Attention heads provide interpretable connectivity patterns, bridging data-driven learning with the study of functional connectivity networks. |
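A recurring technical ingredient in the abstracts above (e.g., the adaptive spectral shaping, unrolled transformer, and DYMAG contributions) is the approximation of spectral graph filters by polynomials of the Laplacian, which avoids an explicit eigendecomposition. As background for attendees, here is a minimal sketch of that standard technique, a Chebyshev expansion of a heat-kernel filter on a hypothetical path graph; it is illustrative only and is not code from any of the papers listed:

```python
import numpy as np

def path_graph_laplacian(n):
    """Combinatorial Laplacian L = D - A of an n-node path graph (toy example)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def chebyshev_filter(L, x, h, K=30, lmax=None):
    """Approximate h(L) @ x with a degree-K Chebyshev expansion (no eigendecomposition
    needed at filtering time; lmax may be any upper bound on the spectrum)."""
    n = L.shape[0]
    if lmax is None:
        lmax = np.linalg.eigvalsh(L).max()
    # Rescale the spectrum from [0, lmax] to [-1, 1], the Chebyshev domain.
    Ls = (2.0 / lmax) * L - np.eye(n)
    # Chebyshev coefficients of h on the rescaled domain via Gauss-Chebyshev quadrature.
    M = 200
    theta = np.pi * (np.arange(M) + 0.5) / M
    lam = lmax * (np.cos(theta) + 1.0) / 2.0
    c = np.array([(2.0 / M) * np.sum(h(lam) * np.cos(k * theta)) for k in range(K + 1)])
    # Three-term recurrence: T_0(Ls) x = x, T_1(Ls) x = Ls x, T_k = 2 Ls T_{k-1} - T_{k-2}.
    t_prev, t_curr = x, Ls @ x
    y = 0.5 * c[0] * t_prev + c[1] * t_curr
    for k in range(2, K + 1):
        t_prev, t_curr = t_curr, 2.0 * (Ls @ t_curr) - t_prev
        y = y + c[k] * t_curr
    return y

if __name__ == "__main__":
    L = path_graph_laplacian(20)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(20)
    h = lambda lam: np.exp(-0.5 * lam)  # heat kernel, diffusion time t = 0.5
    y_cheb = chebyshev_filter(L, x, h, K=30)
    # Exact filter via eigendecomposition, for comparison only.
    w, V = np.linalg.eigh(L)
    y_exact = V @ (h(w) * (V.T @ x))
    print(np.max(np.abs(y_cheb - y_exact)))  # small approximation error
```

Each iteration costs one sparse matrix-vector product, so for a sparse graph the filter runs in O(K |E|) time, which is what makes polynomial designs scalable to large graphs.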