Plenary Speakers

Jennifer Neville
Prof. of Computer Science and Statistics
Purdue University
USA 
Towards Relational AI: the good, the bad, and the ugly of learning over networks
In the last 20 years, there has been a great deal of research on machine learning methods for graphs, networks, and other types of relational data. By moving beyond the independence assumptions of more traditional ML methods, relational models are now able to successfully exploit the additional information that is often observed in relationships among entities. Specifically, network models are able to use relational information to improve predictions about user interests, behavior, and interactions, particularly when individual data is sparse. The tradeoff, however, is that the heterogeneity, partial observability, and interdependence of large-scale network data can make it difficult to develop efficient and unbiased methods, due to several algorithmic and statistical challenges. In this talk, I will discuss these issues while surveying several general approaches used for relational learning in large-scale social and information networks. In addition, to reflect on the movement toward pervasive use of these models in personalized online systems, I will discuss potential implications for privacy, polarization of communities, and spread of misinformation.
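As a concrete illustration of the gain relational information can provide when individual data is sparse, here is a minimal sketch on toy data (not the speaker's models): harmonic label propagation on a small friendship graph, where users with unknown interests iteratively inherit the average of their neighbors' estimates.

```python
# Hypothetical toy example: infer user interests (+1 / -1) from a social
# network when only two users' labels are observed, by repeatedly averaging
# neighbors' estimates -- a simple relational-inference baseline.
import numpy as np

# Toy undirected friendship graph on 6 users (adjacency matrix).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Only users 0 and 5 have observed labels; the rest are unknown (0).
y = np.array([1.0, 0, 0, 0, 0, -1.0])
known = y != 0

x = y.copy()
for _ in range(50):
    # Each node takes the mean of its neighbors' current estimates...
    x = A @ x / A.sum(axis=1)
    # ...except observed nodes, which stay clamped to their true labels.
    x[known] = y[known]

pred = np.sign(x)  # each community inherits the label of its seed user
```

The two triangle-shaped communities end up on opposite sides of the seed labels, even though four of the six users carried no individual label information.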
You can access the slides here.
More info about the speaker here.

Naoki Saito
Mathematics
University of California, Davis
USA 
The First Steps toward Building Natural Graph Wavelets
For the development and theory of discrete wavelets on regular lattices in R^d, the Fourier series and transforms have played a significant role. Hence, when attempting to develop wavelet theory naturally tailored for graphs and networks, some researchers have used graph Laplacian eigenvalues and eigenvectors in place of the frequencies and complex exponentials, respectively. While it is tempting to do so, there are several fundamental problems with this viewpoint. One of them is the intricate relationship between the frequencies and the Laplacian eigenvalues. For undirected and unweighted paths (or cycles), the Laplacian eigenvectors are the discrete cosine (or Fourier) basis vectors, and the corresponding eigenvalues are the squares of their frequencies. Consequently, on those simple graphs, one can develop the classical wavelets precisely using the Littlewood-Paley theory. However, as soon as a graph becomes even slightly more complicated (e.g., a discretized thin rectangle in 2D), the situation completely changes: we cannot view the eigenvalues as a simple monotonic function of frequency anymore. Hence, the first step toward building natural graph wavelets is to sort and organize the Laplacian eigenfunctions without using the eigenvalues, and to create a dual-domain graph. In this talk, I will discuss this important problem further and explain my effort using the Earth Mover's/Wasserstein distance to measure natural distances between eigenfunctions, followed by embedding the resulting distance matrix into an appropriate Euclidean domain. Then, I will discuss how to combine the "close" eigenfunctions to form natural graph wavelet frames and bases that contain localized basis functions at various nodes of the input graph. This latter part is joint work with Haotian Li (UCD) and Alex Cloninger (UCSD).
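The path-graph fact quoted in the abstract is easy to verify numerically. The sketch below (illustrative, not the speaker's code) builds the unnormalized Laplacian of an unweighted path and checks that its eigenvalues are 4 sin^2(pi k / (2n)), i.e., a monotone function of the DCT frequency index k; it is exactly this monotone eigenvalue-frequency correspondence that fails on more complicated graphs.

```python
# Check: on an unweighted path graph, the Laplacian eigenvalues equal
# 4*sin^2(pi*k/(2n)) for k = 0, ..., n-1 -- squared frequencies up to a
# monotone rescaling, matching the DCT eigenvectors of the path.
import numpy as np

n = 8
# Adjacency matrix of the path on n nodes, then L = D - A.
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

evals = np.linalg.eigvalsh(L)  # returned in ascending order
expected = 4 * np.sin(np.pi * np.arange(n) / (2 * n)) ** 2

# Eigenvalue rank agrees with frequency rank on this simple graph.
assert np.allclose(evals, np.sort(expected))
```

For a thin 2D rectangle the eigenvalues become sums of two such sequences, and sorting them no longer orders the eigenvectors by frequency in either direction, which motivates the eigenfunction-organization problem of the talk.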
You can access the slides here.
More info about the speaker here.

Gonzalo Mateos
Electrical and Computer Engineering
University of Rochester
USA 
Digraph Signal Processing: Orthonormal Transforms and Network Inference
We discuss the problem of constructing a graph Fourier transform (GFT) for directed graphs (digraphs), which decomposes graph signals into different modes of variation with respect to the underlying network. Accordingly, to capture low, medium, and high frequencies, we seek a digraph (D)GFT such that the orthonormal frequency components are as spread as possible in the graph spectral domain. To that end, we advocate a two-step design whereby we 1) find the maximum directed variation (i.e., a novel notion of frequency on a digraph) a candidate basis vector can attain, and 2) minimize a smooth spectral dispersion function over the achievable frequency range to obtain the desired spread DGFT basis. Both steps involve nonconvex, orthonormality-constrained optimization problems, which are tackled via a feasible optimization method on the Stiefel manifold that provably converges to a stationary solution. We also briefly discuss a data-adaptive variant whereby a sparsifying orthonormal transform is learnt to also yield parsimonious representations of bandlimited signals. Graph frequency analysis requires a specification of the underlying digraph. In the second part of this talk, we consider inferring a directed network from nodal observations of graph signals generated by linear diffusion dynamics on the sought graph. Observations are modeled as the outputs of a linear graph filter (i.e., a polynomial in a local diffusion graph-shift operator encoding the unknown graph topology), excited with an ensemble of independent graph signals with arbitrarily correlated nodal components. In this context, we first rely on observations of the output signals, along with prior statistical information on the inputs, to identify the diffusion filter. This problem entails solving a system of quadratic matrix equations, which we recast as a smooth quadratic minimization subject to Stiefel manifold constraints.
Subsequent identification of the network topology given the graph filter estimate boils down to finding a sparse and structurally admissible shift that commutes with the given filter, thus forcing the latter to be a polynomial in the sought graph-shift operator. We illustrate the effectiveness of our DGFT and topology inference algorithms through numerical tests on synthetic and real-world networks.
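The commutation property that underpins the topology-recovery step can be sketched in a few lines (synthetic shift operator and arbitrary filter taps, purely illustrative): any graph filter that is a polynomial in the shift operator S necessarily commutes with S, which is why searching for a sparse shift that commutes with the estimated filter recovers admissible topologies.

```python
# Illustrative check: a polynomial graph filter H = h0*I + h1*S + h2*S^2
# commutes with the graph-shift operator S, for any directed S and any taps.
import numpy as np

rng = np.random.default_rng(0)

# Random sparse directed "graph-shift operator" (weighted adjacency matrix).
S = rng.random((5, 5)) * (rng.random((5, 5)) < 0.4)

# Polynomial graph filter with arbitrary taps h.
h = [0.7, -0.3, 0.1]
H = h[0] * np.eye(5) + h[1] * S + h[2] * S @ S

# HS = SH holds exactly, since powers of S commute with S.
assert np.allclose(H @ S, S @ H)
```

A generic matrix of the same size would not commute with H, so the commutativity constraint meaningfully narrows the search for the unknown shift.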
You can access the slides here.
More info about the speaker here.

Gal Mishne
Applied Math
Yale University
USA 
Alejandro Ribeiro
Electrical and Systems Engineering
University of Pennsylvania
USA