7–11 Apr 2025
Lecture and Conference Centre

Convergence of gradient-based training for linear Graph Neural Networks

Speaker

Dhiraj Patel

Description

Graph Neural Networks (GNNs) are powerful tools for learning problems on graph-structured data, with a wide range of applications in molecular biology and social networks. Nonetheless, the theoretical principles underlying their empirical performance are not well understood. This work provides a convergence analysis of gradient dynamics in linear GNN training. We show that gradient flow training of a linear GNN with mean squared loss converges to the global minimum at an exponential rate, with the convergence rate depending explicitly on the initial weights and on the graph shift operator (graph aggregation matrix). Moreover, for balanced initialization, gradient flow training reaches the global minimum of the mean squared loss while also minimizing the total norm of the network weights. Beyond gradient flow, we analyze the convergence of linear GNNs under gradient descent training, interpreted as a discrete approximation of gradient flow with a fixed step size. Finally, we validate our findings on synthetic datasets generated from well-known random graph models and on real-world datasets.
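
To make the setting concrete, here is a minimal sketch of one common formulation of linear GNN training; the notation (graph shift operator S, node features X, layer weights W_k, targets Y) and the exact placement of the aggregation step are illustrative assumptions, not details taken from the talk. A K-layer linear GNN and its squared loss can be written as

\[
\hat{Y} = S X W_1 W_2 \cdots W_K, \qquad
\mathcal{L}(W) = \tfrac{1}{2}\,\lVert \hat{Y} - Y \rVert_F^2,
\]

and gradient flow evolves each layer as

\[
\dot{W}_k(t) = -\frac{\partial \mathcal{L}}{\partial W_k}, \qquad k = 1, \dots, K.
\]

Exponential convergence then means \( \mathcal{L}(W(t)) - \mathcal{L}^\ast \le e^{-ct}\,(\mathcal{L}(W(0)) - \mathcal{L}^\ast) \) for some rate \( c > 0 \) determined by the initial weights and by \( S \), while balanced initialization refers to the condition \( W_{k+1}^\top W_{k+1} = W_k W_k^\top \) for all \( k \).

The short Python sketch below runs gradient descent with a fixed step size on this loss, i.e. the Euler discretization of the flow above; the graph, features, and targets are random placeholders, and the two-layer depth is an arbitrary choice for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, d, h = 20, 5, 4  # nodes, input features, hidden width (illustrative sizes)

# Random undirected graph; S is the symmetrically normalized adjacency,
# standing in for the graph shift operator of the abstract.
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T
deg = A.sum(axis=1)
deg[deg == 0] = 1.0
S = A / np.sqrt(np.outer(deg, deg))

X = rng.normal(size=(n, d))         # node features (placeholder data)
Y = rng.normal(size=(n, 1))         # regression targets (placeholder data)
W1 = 0.1 * rng.normal(size=(d, h))  # layer weights, small random init
W2 = 0.1 * rng.normal(size=(h, 1))

eta = 1e-2  # fixed step size: gradient descent as an Euler step of gradient flow
H = S @ X   # one round of graph aggregation
for t in range(2000):
    R = H @ W1 @ W2 - Y          # residual of the linear GNN
    loss = 0.5 * np.sum(R ** 2)  # squared (Frobenius) loss
    # Simultaneous update of both layers from the exact gradients of the loss.
    W1, W2 = W1 - eta * (H.T @ R @ W2.T), W2 - eta * ((H @ W1).T @ R)
    if t % 500 == 0:
        print(t, loss)

For a sufficiently small step size, the printed loss should decay toward the global minimum, mirroring the exponential rate established for the continuous flow.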
