Speakers & Sessions
Our plenary speakers are:
- Michael Bronstein (Oxford)
- Gitta Kutyniok (LMU)
- Dan Spielman (Yale)
- Soledad Villar (Johns Hopkins)
- Samory Kpotufe (Columbia University)
- Mark Iwen (Michigan State University)
- Holger Rauhut (RWTH Aachen University)
- Coralia Cartis (Oxford)
- Rebecca Willett (University of Chicago)
Our special sessions will bring together domain experts to present on topics including Manifold Learning from Data, Graph Signal Processing, Topological Data Analysis, and Sampling Theory for Neural Activity Data, among others. Each session will feature 3-4 speakers and run for two hours.
Manifold Learning from Data
Hosted by Kevin Moon (Utah State).
Manifold learning algorithms are powerful data analysis methods that are based on the manifold assumption. Informally, the manifold assumption states that high-dimensional data are approximated well by a small number of locally smooth dimensions. This special session focuses on recent manifold learning algorithms and on new mathematical insights into existing ones. Applications of interest include, but are not limited to, dimensionality reduction, data denoising, data visualization, data integration, density estimation, clustering, and semi-supervised learning.
- Jeff Calder (University of Minnesota)
- Ronald Coifman (Yale)
- Roy Lederman (Yale)
- Dan Kushnir (Nokia Bell Labs)
- Jake Rhodes (Idaho State University)
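As a rough, self-contained illustration of the manifold assumption described above (the circle data and all parameters are invented for the example, and are not tied to any speaker's work), a Laplacian eigenmaps-style embedding can recover low-dimensional structure hidden in high-dimensional data:

```python
import numpy as np

# Hypothetical example: points on a 1-D circle embedded in 10-D space.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
circle = np.stack([np.cos(t), np.sin(t)], axis=1)

# Embed in 10 dimensions via a random linear map, plus small noise.
A = rng.normal(size=(2, 10))
X = circle @ A + 0.01 * rng.normal(size=(200, 10))

# Laplacian eigenmaps-style embedding: Gaussian affinities and the
# symmetric normalized graph Laplacian built from the data.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(D2)                     # simple data-driven bandwidth
W = np.exp(-D2 / eps)
d = W.sum(1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
embedding = vecs[:, 1:3]                # two smallest nontrivial eigenvectors
```

Despite the 10-dimensional ambient space, the two coordinates in `embedding` trace out the underlying one-dimensional circle, which is exactly the kind of hidden geometric structure these methods aim to exploit.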
Signal and Image Processing
This symposium focuses on graph-based signal and image processing. Experts in the field will come together to share their latest research findings, discuss current challenges and opportunities, and network with their peers. The event will cover a range of topics including signal and image processing techniques, sampling and reconstruction methods, graph neural networks, and their applications. With attention to both theory and practice, this symposium will provide valuable insights for those interested in these rapidly growing areas.
This session will feature:
- Selin Aviyente (Michigan State University): Low-rank and Smooth Tensor Recovery on Cartesian Product Graphs
- Kiryung Lee (Ohio State): Stability Analysis of Resolving Pulses of Unknown Shape from Compressive Fourier Measurements
- Gal Mishne (UCSD): Graph Laplacian Learning with Exponential Family Noise
- Jing Qin (University of Kentucky): Fast Dual-Graph Regularized Background Foreground Separation
Methods for Low Rank Matrices and Tensors
- Lijun Ding (UW Madison)
- Lara Kassab (UCLA)
- Palina Salanevich (Utrecht University)
- Yizhe Zhu (UC Irvine)
Data Geometry and Optimization
Prior work in optimization has focused on the function being optimized. In recent years, however, attention has also turned to the geometry of the data. By appropriately incorporating this geometry, we can obtain faster convergence, influence which local optima we converge to, or better understand the properties of those optima. In this mini-symposium, we invite speakers to present recent advances in optimization that incorporate the geometry of the data to solve or understand the optimization problem.
This session will feature:
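A minimal sketch of the idea, with invented data and parameters: on an ill-conditioned least-squares problem, preconditioning the gradient with the data's covariance (a Newton-like step) converges far faster than plain gradient descent.

```python
import numpy as np

# Hypothetical ill-conditioned data: one direction is 100x larger.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 2)) * np.array([10.0, 0.1])
x_true = np.array([1.0, -2.0])
b = A @ x_true

H = A.T @ A               # Hessian of 0.5 * ||Ax - b||^2
P = np.linalg.inv(H)      # preconditioner built from the data's geometry

def gd(steps, precond=False):
    x = np.zeros(2)
    lr = 1.0 / np.linalg.eigvalsh(H)[-1]   # safe step size for plain GD
    for _ in range(steps):
        g = A.T @ (A @ x - b)
        x -= (P @ g) if precond else lr * g
    return x

err_plain = np.linalg.norm(gd(100) - x_true)
err_precond = np.linalg.norm(gd(100, precond=True) - x_true)
# Plain GD crawls along the poorly scaled direction; the preconditioned
# iteration accounts for the data's geometry and converges immediately.
```

The same function is being optimized in both cases; only the use of the data's geometry changes, which is the theme of this session.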
Randomized Algorithms for Complex Data
Hosted by Elizaveta Rebrova (Princeton).
Over recent years, randomized methods have led to breakthroughs in many areas, including large-scale optimization, numerical linear algebra, scientific computing, and machine learning. Interest in developing efficient, interpretable, and theory-supported randomized algorithms continues to grow, motivated in part by the emergence of large-scale, information-rich data, such as high-dimensional, multi-modal (tensor), or graph data. This SampTA session brings together researchers creating and analyzing modern randomized algorithms for diverse problems involving large-scale complex data.
Machine Learning and Signal Processing on Graphs and Manifolds
Hosted by Michael Perlmutter (Boise State University).
Recent years have seen rapid growth in the development of data science and machine learning techniques for graph- and manifold-structured data. These techniques typically aim to find and utilize the (often hidden) intrinsic geometric structure of the data set. This session will feature several examples of such techniques which are based on methods initially developed in the context of graph signal processing.
Sampling Theory in Neuroscience
- Scaling behavior in big-data neuroscience towards precision medicine, Danilo Bzdok (McGill)
- Half-Hop: A graph upsampling approach for improved message passing, Eva Dyer (Georgia Tech)
- Quantifying and comparing neural manifold structure with limited samples, Alex Williams (NYU & Flatiron Institute)
- Modeling brain-wide neural recordings to extract behaviorally-relevant structure, Matthew Perich (UdeM)
- Meta-Learning to leverage population data for personalized neurostimulation optimization, Guillaume Lajoie (UdeM)
Quantization in Signal Processing and Data Science
A common challenge in signal processing and data science is that digital processing requires quantizing all quantities involved to a finite alphabet. While many numerical methods with robust guarantees remain reliable under such perturbations when the quantization alphabet is sufficiently large, modern large-scale applications often require extremely coarse quantization. This setting calls for tailored algorithms and novel mathematical methods to analyze their performance. In this special session, we will bring together experts from applied mathematics and engineering to discuss recent advances in the theory of quantization.
This session will feature four invited talks:
- Near-optimality of \(\Sigma\Delta\) quantization for \(L^2\)-approximation with polynomials in Bernstein form (Sinan Gunturk, Courant Institute)
- Learning Signal Spaces from Incomplete Binary Measurements (Laurent Jacques, UC Louvain)
- Algorithms and Theory for Quantizing Neural Networks (Rayan Saab, UCSD)
- Digital Halftoning via Mixed-Order Weighted Sigma-Delta Modulation (Hanna Veselovska, Technical University of Munich)
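To illustrate the coarse-quantization setting discussed above, here is a minimal first-order Sigma-Delta sketch (the signal and window sizes are invented for the example): each sample is replaced by a single bit, yet a crude low-pass reconstruction recovers the oversampled signal with small error.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order Sigma-Delta: map each sample to +-1 while keeping
    the accumulated quantization error (the state u) bounded."""
    u, q = 0.0, np.empty_like(x)
    for i, xi in enumerate(x):
        q[i] = 1.0 if u + xi >= 0 else -1.0
        u += xi - q[i]          # state stays bounded when |x| <= 1
    return q

# Heavily oversampled low-frequency signal with |x| < 1.
n = 4000
t = np.arange(n) / n
x = 0.5 * np.sin(2 * np.pi * 3 * t)
q = sigma_delta_1bit(x)

# Moving-average (crude low-pass) reconstruction from the one-bit stream.
w = 101
x_hat = np.convolve(q, np.ones(w) / w, mode="same")
err = np.max(np.abs(x - x_hat)[w:-w])   # error away from the boundaries
```

Even though the alphabet has only two letters, the error decays as the averaging window grows, which is the kind of tradeoff the talks in this session analyze rigorously.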
Event-Driven Sampling, Time Encoding, and Sampling with Integral Transforms
The focal point of the session is event-driven sampling and time-encoding machines. These sampling schemes are motivated by the need for energy efficiency and rely on capturing the times at which significant events of a signal occur, rather than following a clock-based pattern. Sampling events are often modeled by level sets of certain integral transforms and lead to non-uniform sampling geometries. The session also seeks to highlight technical parallels between event-driven sampling on the one hand, and the theory of non-uniform sampling supplemented with derivatives and sampling strategies adapted to varying bandwidth on the other.
This special session will feature five invited talks:
- Pseudo-inversion of integration-based time encoding using POCS (Nguyen T. Thao, Dominik Rzepka, Marek Miskowicz)
- Iterative reconstruction of bandlimited signals from nonuniform samples by sliding periodization of nonuniformity (Nguyen T. Thao, Dominik Rzepka, Marek Miskowicz)
- Model-Driven Quantization for Time Encoding Machines (Dorian Florescu)
- Sampling theorems with derivatives in shift-invariant spaces generated by exponential B-splines (Irina Shafkulovska, Karlheinz Gröchenig)
- Sampling theorems in spaces of variable bandwidth generated via Wilson basis (Beatrice Andreolli, Karlheinz Gröchenig)