Description
We introduce a class of processing architectures for node features on a graph that are equivariant with respect to a local G-action, where G is a group. A local group action transforms the feature vector at each graph node by a group element, and these group elements need not be equal across nodes (a formal sketch follows the list below). Local symmetries have already been considered in several works in data science, ranging from color image processing to equivariant neural networks. Locally equivariant processing architectures are an attractive research direction because they promise robustness to local perturbations of the input. Our contributions to the design of locally equivariant processing architectures are the following:
- We extend existing approaches, which focused on particular groups and low-dimensional features, to general compact connected Lie groups and high-dimensional feature vectors.
- We provide a theoretically sound and generalizable framework for the design of such architectures.
- We propose a consistent parametrization of our architectures, amenable to machine learning tasks.
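
To fix notation for the local action described above, here is a minimal formal sketch; the symbols f, g_i, rho, and Phi are our illustrative choices, not taken from the work itself:

```latex
% Node features f_i in R^d; a local group element g_i acts at node i
% through a representation \rho of G:
(g \cdot f)_i = \rho(g_i)\, f_i , \qquad g = (g_i)_{i \in V} \in G^{V}.
% A processing map \Phi is locally equivariant if it intertwines the
% node-wise action:
\Phi(g \cdot f) = g \cdot \Phi(f) \quad \text{for all } g \in G^{V}.
```

The key point is that g assigns an independent group element to every node, in contrast to global equivariance, where a single group element acts on all nodes at once.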
Our architectures arise from differential operators acting on sections of specific vector bundles. Based on generalized Laplacian operators, we construct a bundle scale-space representation of the input data, akin to scale-space methods in PDE-based image processing. These bundle scale-space representations extend classical Gaussian scale-space representations in two ways: first, they inherently respect local symmetries; second, the generalized Laplacians that generate the bundle scale spaces possess non-trivial null-spaces, which enhances expressivity. The generalized Laplacians inducing our architectures are parametrized by geometric quantities (Riemannian metrics and connection 1-forms) associated with the graph.

Drawing on methods from lattice gauge theory and vector diffusion maps, we describe how to extract implementable discretizations of bundle scale-space representations while preserving equivariance under local group actions. Furthermore, we show how to parametrize our model efficiently, resulting in architectures similar to message-passing networks on graphs. This enables us to extend the usefulness of our methods via machine learning.
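
To make the discretization step concrete, below is a minimal NumPy sketch under the assumption that the connection Laplacian of vector diffusion maps serves as the discrete generalized Laplacian: symmetric edge weights w stand in for the metric data and orthogonal transport matrices O for the connection 1-form. All identifiers (w, O, tau, n_steps, scale_space) are our illustrative choices, not notation from the work itself. The script also checks local equivariance numerically: acting node-wise with rotations g_i on the features while gauge-transforming the transports O_ij to g_i O_ij g_j^T commutes with the scale-space evolution.

```python
# Minimal sketch (not the authors' implementation): a discrete bundle
# scale space built from the graph connection Laplacian of vector
# diffusion maps, with a numerical check of local SO(d)-equivariance.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3  # number of graph nodes, feature dimension

def random_rotation(d):
    """Sample an element of SO(d) (QR of a Gaussian matrix)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    q *= np.sign(np.diag(r))        # make the factorization unique
    if np.linalg.det(q) < 0:        # flip a column to land in SO(d)
        q[:, 0] *= -1
    return q

# Symmetric edge weights (discrete metric data) and consistent
# orthogonal transports (discrete connection), O_ji = O_ij^T.
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
O = np.zeros((n, n, d, d))
for i in range(n):
    for j in range(i + 1, n):
        O[i, j] = random_rotation(d)
        O[j, i] = O[i, j].T

def connection_laplacian(f, w, O):
    """Generalized Laplacian: (L f)_i = sum_j w_ij (f_i - O_ij f_j)."""
    deg = w.sum(axis=1)                                # node degrees
    transported = np.einsum('ij,ijab,jb->ia', w, O, f)
    return deg[:, None] * f - transported

def scale_space(f, w, O, tau=0.05, n_steps=40):
    """Explicit Euler steps of the heat flow df/dt = -L f; each step
    aggregates transported neighbor features (message-passing form)."""
    for _ in range(n_steps):
        f = f - tau * connection_laplacian(f, w, O)
    return f

# Local equivariance check: rotate features node-wise by g_i and
# gauge-transform the transports, O_ij -> g_i O_ij g_j^T.
f = rng.standard_normal((n, d))
g = np.stack([random_rotation(d) for _ in range(n)])
f_evolved = scale_space(f, w, O)
f_gauged = np.einsum('iab,ib->ia', g, f)                # (g . f)_i
O_gauged = np.einsum('iab,ijbc,jdc->ijad', g, O, g)     # g_i O_ij g_j^T
assert np.allclose(np.einsum('iab,ib->ia', g, f_evolved),
                   scale_space(f_gauged, w, O_gauged))
print("scale space commutes with the local group action")
```

Each Euler step lets a node aggregate its neighbors' features after parallel transport along the edges, which is exactly the message-passing form mentioned above; in a learnable version, one would expect w and O (subject to the gauge constraints) to carry the trainable geometric parameters.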