7–11 Apr 2025
Lecture and Conference Centre
Europe/Warsaw timezone

Generalisation Error for Semi-Supervised Learning Using Graph Neural Networks

8 Apr 2025, 14:40
20m
Room 7

Speaker

Nil Ayday

Description

Graph Neural Networks (GNNs) have become powerful tools for modeling complex relationships in graph-structured data across various domains. The success of GNNs comes from their graph convolution process, which allows information to propagate through the graph structure so that each node can aggregate information from its neighbours. This process combines graph information (the connections between nodes) with node features (attributes specific to each node) to create representations that can be used for downstream tasks. In this presentation, we investigate how much the graph structure and the node features each contribute to the predictions of GNNs in a semi-supervised learning setting. We derive the exact generalization error for linear GNNs under a theoretical framework in which the node features and the graph convolution are partial spectral observations of the underlying data. We then use this generalization error to evaluate the learning capabilities of Graph Convolutional Networks (GCNs), a widely used type of GNN built on graph convolution operations. A key insight from our analysis is that GCNs fail to exploit graph and feature information when the two are not aligned. We conclude with ongoing work on extending our analysis to other state-of-the-art GNNs and graph attention mechanisms. Our goal is to develop an architecture that better exploits graph and feature information.
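
For orientation, the graph convolution referred to above can be sketched with the standard GCN propagation rule (Kipf and Welling); this is a generic illustration of a single linear graph convolution layer, not necessarily the exact model analysed in the talk:

\[
\tilde{A} = A + I, \qquad \tilde{D}_{ii} = \sum_j \tilde{A}_{ij}, \qquad \hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2},
\]
\[
Z = \hat{A}\, X\, W,
\]

where \(A\) is the adjacency matrix (graph information), \(X\) is the node feature matrix, and \(W\) is a learned weight matrix; each row of \(Z\) aggregates a node's own features together with those of its neighbours, weighted by the normalised adjacency \(\hat{A}\).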
