Mathematical Foundations of AI Research Centre
19/11/2021 11:00 AM | MB-503 | Michael Farber | Mathematical Foundations of AI - what is it about?
19/11/2021 11:30 AM | MB-503 and Zoom | Primoz Skraba (QMUL) | Wasserstein Stability of Persistence and Applications
10/12/2021 11:00 AM | MB-503 | Matthew Thorpe (University of Manchester) | A Random Walk from Semi-Supervised Learning to Neural Networks
Semi-supervised learning is the problem of finding missing labels; more precisely, one has a data set of feature vectors of which an (often small) subset is labelled. The semi-supervised learning assumption is that similar feature vectors should have similar labels, which implies one needs a geometry on the set of feature vectors. A typical way to represent this geometry is via a graph where the nodes are the feature vectors and the edges are weighted by some measure of similarity.

Laplace learning is a popular graph-based method for solving the semi-supervised learning problem which essentially requires one to minimise a Dirichlet energy defined on the graph (hence the Euler-Lagrange equation is Laplace's equation). However, at low labelling rates Laplace learning typically performs poorly. This is due to the lack of regularity, or the ill-posedness, of solutions to Laplace's equation in any dimension greater than or equal to two. The random walk interpretation allows one to characterise how close one is to entering the ill-posed regime. In particular, it allows one to give a lower bound on the number of labels required and even provides a route for correcting the bias. Correcting the bias leads to a new method, called Poisson learning.

Finally, the ideas behind correcting the bias in Laplace learning have motivated a new graph neural network architecture which does not suffer from the over-smoothing phenomenon. In particular, this type of neural network, which we call GRAND++ (GRAph Neural Diffusion with a source term), enables one to employ deep architectures.
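To make the contrast above concrete, here is a minimal NumPy sketch of Laplace learning (harmonic extension of the labels, i.e. minimising the graph Dirichlet energy) next to Poisson learning (zero-mean point sources at the labelled nodes, solved by a random-walk iteration). This is an illustration under stated assumptions, not the speaker's code: the Gaussian similarity graph, the bandwidth sigma, the iteration count, and the two-cluster toy data are all choices made here.

    import numpy as np

    def gaussian_weights(X, sigma=0.5):
        """Dense similarity graph: W_ij = exp(-|x_i - x_j|^2 / sigma^2)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / sigma ** 2)
        np.fill_diagonal(W, 0.0)
        return W

    def laplace_learning(W, labeled_idx, Y):
        """Minimise the graph Dirichlet energy: solve L u = 0 at the
        unlabelled nodes with u fixed to the one-hot labels Y at the
        labelled nodes (harmonic extension)."""
        n, k = W.shape[0], Y.shape[1]
        L = np.diag(W.sum(1)) - W                 # graph Laplacian L = D - W
        mask = np.zeros(n, dtype=bool)
        mask[labeled_idx] = True
        U = np.zeros((n, k))
        U[labeled_idx] = Y
        # Block solve for the unlabelled nodes: L_uu u_u = -L_ul u_l
        U[~mask] = np.linalg.solve(L[np.ix_(~mask, ~mask)],
                                   -L[np.ix_(~mask, mask)] @ U[mask])
        return U.argmax(1)

    def poisson_learning(W, labeled_idx, Y, iters=1000):
        """Bias-corrected variant: solve L u = b, where b places the
        centred source y_j - mean(Y) at each labelled node, via the
        random-walk iteration u <- u + D^{-1} (b - L u)."""
        n, k = W.shape[0], Y.shape[1]
        deg = W.sum(1)
        L = np.diag(deg) - W
        b = np.zeros((n, k))
        b[labeled_idx] = Y - Y.mean(0)            # sources sum to zero
        U = np.zeros((n, k))
        for _ in range(iters):
            U += (b - L @ U) / deg[:, None]
        return U.argmax(1)

    # Toy demo: two Gaussian clusters with a single label per class.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                   rng.normal(2.0, 0.3, (50, 2))])
    labeled_idx = np.array([0, 50])
    Y = np.eye(2)                                 # one-hot labels
    W = gaussian_weights(X)
    print(laplace_learning(W, labeled_idx, Y))
    print(poisson_learning(W, labeled_idx, Y))

One label per class is the low-labelling-rate regime the abstract refers to; on this tiny, well-separated example both methods behave, but on larger and noisier data the zero-mean sources are what keep Poisson learning informative where the harmonic extension degenerates.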
21/01/2022 11:00 AM | Zoom | Omer Bobrowski (QMUL) | Topological Phase Transitions in Random Geometric Complexes
Connectivity and percolation are two well-studied phenomena in random graphs. In this talk we will discuss higher-dimensional analogues of connectivity and percolation that occur in random simplicial complexes. Simplicial complexes are a natural generalization of graphs that consist of vertices, edges, triangles, tetrahedra, and higher-dimensional simplexes. We will mainly focus on random geometric complexes. These complexes are generated by taking the vertices to be a random point process and adding simplexes according to their geometric configuration. Our generalized notions of connectivity and percolation use the language of homology - an algebraic-topological structure representing cycles of different dimensions. We will discuss recent results analyzing phase transitions related to these topological phenomena.
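As a numerical illustration of the lowest-dimensional case, the sketch below estimates the probability that a random geometric graph (the 1-skeleton of a random geometric complex) on n uniform points in the unit square is connected, for radii around the classical threshold r* = sqrt(log n / (pi n)). The point count, radius grid, and trial count are illustrative choices made here; probing the higher-dimensional homological transitions from the talk would require a topological data analysis library such as GUDHI.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def random_geometric_graph(n, r, rng):
        """n uniform points in the unit square; edge when distance <= r."""
        X = rng.random((n, 2))
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        A = (d <= r) & ~np.eye(n, dtype=bool)
        return csr_matrix(A)

    def prob_connected(n, r, trials=200, seed=0):
        """Monte Carlo estimate of P(graph is connected)."""
        rng = np.random.default_rng(seed)
        hits = sum(
            connected_components(random_geometric_graph(n, r, rng),
                                 directed=False)[0] == 1
            for _ in range(trials))
        return hits / trials

    n = 200
    r_star = np.sqrt(np.log(n) / (np.pi * n))     # connectivity threshold
    for scale in (0.5, 1.0, 1.5, 2.0):
        r = scale * r_star
        print(f"r = {scale:.1f} * r*: P(connected) ~ {prob_connected(n, r):.2f}")

Sweeping r through r* exhibits the sharp 0-to-1 jump in the probability of connectivity; the higher-dimensional analogues discussed in the talk replace "connected" with the vanishing of the k-th homology of the associated complex.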