The general purpose of the School Colloquia is to provide accessible and engaging introductions to a broad range of topics of current research interest in any branch of Mathematics or its applications.
The colloquium will be held this semester (starting January 2019) in various locations. Please check this page a few days in advance for full details.
The University website provides detailed directions for reaching the Mile End campus.
A hypergraph with vertex set, say, {1,2,...,n} is a collection of subsets of the vertex set of some fixed size – these subsets are called edges. For example, the subsets might be all triples that form an arithmetic progression. An independent set in the hypergraph is a subset of the vertices that contains no edge – in the example, it would be a set of integers containing no 3-term arithmetic progression. It has recently been discovered that the independent sets in any hypergraph must be structured in some way: they are all contained within one of a small collection of "independent-like" subsets. We shall discuss this discovery and its applications.
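The definitions above are easy to make concrete. The sketch below (an illustration, not part of the talk) builds the hypergraph of 3-term arithmetic progressions in {1,...,n} and checks whether a given set is independent:

```python
# The hypergraph whose edges are the 3-term arithmetic progressions
# {a, a+d, a+2d} in {1, ..., n}; an independent set contains no edge.

def ap3_edges(n):
    """All 3-term arithmetic progressions in {1, ..., n}, as sets."""
    return [{a, a + d, a + 2 * d}
            for d in range(1, n)
            for a in range(1, n - 2 * d + 1)]

def is_independent(s, edges):
    """True if the vertex set s contains no edge of the hypergraph."""
    s = set(s)
    return not any(e <= s for e in edges)

edges = ap3_edges(9)
print(is_independent({1, 2, 4, 8}, edges))   # True: no 3-term AP inside
print(is_independent({1, 3, 5}, edges))      # False: 1, 3, 5 is a 3-term AP
```

Independent sets of this particular hypergraph are exactly the 3-AP-free sets of integers mentioned in the abstract.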
The theory of homogeneous flows provides a powerful mathematical toolkit that has recently contributed to the solution of a number of longstanding problems. I will survey some of the recent progress, including a study of distances in multi-loop networks (circulant graphs), the coin exchange problem and the derivation of a new kinetic equation which captures surprising transport phenomena in crystals and quasicrystals. This lecture is aimed at a broad audience.
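Among the applications mentioned above is the coin exchange problem. As a concrete warm-up (an illustration, not taken from the talk), for two coprime denominations a and b, Sylvester's classical formula says the largest amount that cannot be paid exactly is ab - a - b, which a brute-force search confirms:

```python
# The coin exchange (Frobenius) problem for two coprime denominations:
# the largest t not representable as x*a + y*b with x, y >= 0.

def representable(t, a, b):
    """Can t be paid exactly with coins of value a and b?"""
    return any((t - x * a) % b == 0 for x in range(t // a + 1))

def frobenius_brute(a, b):
    """Largest non-representable amount, by brute force."""
    limit = a * b  # every t > ab - a - b is representable
    return max(t for t in range(1, limit) if not representable(t, a, b))

a, b = 5, 7
print(frobenius_brute(a, b), a * b - a - b)  # both 23
```

For three or more denominations no such closed formula exists, which is where the harder analysis begins.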
I'll give an overview of some of the major open problems in general relativity and the progress that mathematicians have made in recent years towards their solution.
Critical phenomena in physics are supposed to involve some kind of scale invariance. The Thompson groups are composed of local scaling transformations so one might expect them to be relevant to critical phenomena on lattices. Motivated by this thought we will construct a family of unitary representations of the Thompson groups whose coefficients are of some interest.
Due to its use in cryptographic protocols such as the Diffie-Hellman key exchange, the discrete logarithm problem has attracted a considerable amount of attention over the past 40 years. In this talk, we summarise the key technical ideas and their evolution for the case of discrete logarithms in small characteristic finite fields. This road leads from the original belief that this problem was hard enough for cryptographic purposes to the current state of the art, where the algorithms are so efficient and practical that the problem can no longer be considered for cryptographic use.
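To fix ideas, here is the problem itself in a toy setting (a small prime field, not the small-characteristic fields of the talk, and a generic algorithm rather than the index-calculus methods discussed): given g and h = g^x mod p, recover x. Shanks' baby-step giant-step method does this in roughly sqrt(p) operations:

```python
# Baby-step giant-step for discrete logarithms: write x = i*m + j with
# m ~ sqrt(p), precompute the "baby steps" g^j, then walk "giant steps"
# h * g^(-m), h * g^(-2m), ... until a baby step is hit.

from math import isqrt

def bsgs(g, h, p):
    """Return x with pow(g, x, p) == h, assuming one exists."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # g^j -> j
    gm_inv = pow(g, -m, p)                      # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]          # h = g^(i*m + j)
        gamma = gamma * gm_inv % p
    return None

p, g = 1019, 2          # 2 generates the multiplicative group mod 1019
h = pow(g, 777, p)
print(bsgs(g, h, p))    # 777
```

Generic methods like this one are the best known for well-chosen groups; the talk's point is that for small characteristic fields, vastly faster specialised algorithms now exist.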
The theme of this talk is the dichotomy between amenability and non-amenability. Because the group of motions of the three-dimensional Euclidean space is non-amenable (as a group with the discrete topology), we have the Banach-Tarski paradox. In dimension two, the group of motions is amenable and there is therefore no paradoxical decomposition of the disk. This dichotomy is most apparent in the theory of von Neumann algebras: the amenable ones are completely classified by the work of Connes and Haagerup, while the non-amenable ones give rise to amazing rigidity theorems, especially within Sorin Popa's deformation/rigidity theory. I will illustrate the gap between amenability and non-amenability for von Neumann algebras associated with countable groups, with locally compact groups, and with group actions on probability spaces.
With the move to more renewable sources of electricity, three things are becoming necessary: storage, flexibility of demand, and close monitoring of the network.
I will give an overview of some mathematical work on these. Firstly, with Lisa Flatley and Mike Waterson, we have designed an algorithm for optimal use of an energy store; this can be used to maximise profit from variations in the price or to minimise variations in the mismatch between supply and demand. Secondly, with Ellen Webborn, we have analysed the effect of making thermostatically controlled loads frequency-sensitive; this can reduce variations in the mismatch between supply and demand but there are dangers that large uptake of frequency-sensitive technology could lead to instabilities; in our model the uniform distribution of phases appears to be stable and the fully synchronised one is unstable. Thirdly, I am developing a Gaussian process method to detect oscillations in power flow from phasor measurement unit data, and estimate their frequency, damping rate and mode shape; it is essential to detect these modes early enough to design control for them.
The moving sofa problem is a well-known open problem in geometry. It asks for the planar shape of largest area that can be moved around a right-angled corner in a two-dimensional hallway of width 1. Although deceptively easy to state, it turns out to be highly nontrivial to analyze, and has a rich structure that is intriguing to amateurs and experts alike. In this talk I will survey the known results about the problem, including a new moving sofa shape with an interesting algebraic structure, and new bounds on the area of a moving sofa I derived recently in collaboration with Yoav Kallus.
We discuss several problems of modern physics requiring probabilistic ideas and techniques, mostly in the frameworks of spectral analysis of differential and finite-difference self-adjoint operators with random coefficients and Hermitian random matrices of large size. We give an outline of certain basic results (self-averaging, ergodic operators, dense point spectrum, spectral rigidity and universality, etc.) and discuss their spectral, probabilistic and physical content, including results that appeared recently.
It is easy to tile the plane with dominoes (2x1 rectangles). It is also easy to tile the plane with trominoes (three 1x1 squares forming an L-shape). Of course, not every shape tiles the plane. What happens when we increase the number of dimensions from two to three, or beyond?
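Why is the L-tromino case "easy"? Two L-trominoes tile a 2x3 rectangle, and 2x3 rectangles obviously tile the plane. The brute-force backtracking check below (an illustration, not part of the talk) verifies this, and also that a 2x2 board cannot be tiled:

```python
# Tile a small rectangle with L-trominoes by backtracking. Each shape is
# a rotation of the L-tromino, written as offsets from its first cell in
# row-major order, so anchoring at the first free cell is exhaustive.

SHAPES = [
    [(0, 0), (1, -1), (1, 0)],
    [(0, 0), (1, 0), (1, 1)],
    [(0, 0), (0, 1), (1, 1)],
    [(0, 0), (0, 1), (1, 0)],
]

def tileable(rows, cols):
    free = {(r, c) for r in range(rows) for c in range(cols)}

    def solve():
        if not free:
            return True
        r, c = min(free)  # first free cell in row-major order
        for shape in SHAPES:
            cells = [(r + dr, c + dc) for dr, dc in shape]
            if all(cell in free for cell in cells):
                free.difference_update(cells)
                if solve():
                    return True
                free.update(cells)  # backtrack
        return False

    return solve()

print(tileable(2, 3), tileable(2, 2))  # True False
```

The higher-dimensional question of the talk is exactly what happens when such elementary arguments run out.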
We shall discuss some recent progress regarding a conjecture of Godbersen on mixed volumes, and other measures of symmetry, for which the simplex is conjectured, or proved, to be extremal.
Markoff triples are integer solutions of the equation $x^2+y^2+z^2=3xyz$ which arose in Markoff's spectacular and fundamental work (1879) on diophantine approximation and has since become ubiquitous in a tremendous variety of different fields in mathematics and beyond. After reviewing some of these (Product Replacement Algorithm; algebraic solutions of Painleve VI equations) we will present recent joint work with Bourgain and Sarnak on the connectedness of the set of solutions of the Markoff equation modulo a prime $p$ under the action of the group generated by Vieta involutions. Similar results for composite moduli enable us to establish certain new arithmetic properties of Markoff numbers, for instance, the fact that almost all of them are composite.
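The Vieta involutions are elementary to experiment with (a small illustration, not from the paper): since x and 3yz - x are the two roots of $t^2 - 3yzt + (y^2+z^2) = 0$, replacing x by 3yz - x maps solutions to solutions, and likewise in the other two coordinates. The sketch below counts the orbits of the nonzero solutions mod p under these moves together with coordinate permutations:

```python
# Orbits of nonzero Markoff triples mod p under the Vieta involutions
# x -> 3yz - x (and its analogues) plus coordinate permutations.

def markoff_orbits(p):
    sols = {(x, y, z)
            for x in range(p) for y in range(p) for z in range(p)
            if (x * x + y * y + z * z - 3 * x * y * z) % p == 0
            and (x, y, z) != (0, 0, 0)}
    seen, orbits = set(), 0
    for s in sols:
        if s in seen:
            continue
        orbits += 1
        stack = [s]  # depth-first search of the orbit of s
        while stack:
            x, y, z = stack.pop()
            if (x, y, z) in seen:
                continue
            seen.add((x, y, z))
            stack.extend([((3 * y * z - x) % p, y, z),
                          (x, (3 * x * z - y) % p, z),
                          (x, y, (3 * x * y - z) % p),
                          (y, x, z), (x, z, y)])  # transpositions
        # closure: Vieta moves and permutations preserve the equation
    return orbits

print(markoff_orbits(7))
```

The theorem of the talk concerns exactly when this orbit count is 1; for p = 3, for instance, the moves reduce to sign flips on the 8 nonzero solutions, so there is a single orbit.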
Some physical and mathematical theories have the unfortunate feature that if one takes them at face value, many quantities of interest appear to be infinite! Various techniques, usually going under the common name of “renormalisation” have been developed over the years to address this, allowing mathematicians and physicists to tame these infinities. We will dip our toes into some of the mathematical aspects of these techniques and we will see how they have recently been used to make precise analytical statements about the solutions of some equations whose meaning was not even clear until now.
Convex domains play a central role in many areas of analysis and mathematical computer science. During the last 15 years considerable progress has been made in understanding the distribution of mass within such domains in high dimensions. The most telling point is that such domains exhibit characteristics that we normally associate with product probability measures: the joint laws of independent random variables. The talk will survey some of the progress, explain the main open problems and show how our intuition can be badly misleading when we try to think about high-dimensional objects.
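One standard example of how badly intuition can fail (an illustration, not taken from the talk): in the unit ball in R^n, the fraction of the volume lying within distance epsilon of the boundary is 1 - (1 - epsilon)^n, which tends to 1 as the dimension grows, so essentially all the mass sits in a thin outer shell:

```python
# Fraction of the volume of the unit ball in R^n that lies in the outer
# 1% shell: 1 - 0.99**n (volume scales as radius**n).

for n in [2, 10, 100, 1000]:
    shell = 1 - 0.99 ** n
    print(f"n = {n:4d}: volume fraction in the outer 1% shell = {shell:.4f}")
```

Already at n = 100 nearly two thirds of the volume is within 1% of the boundary, and at n = 1000 essentially all of it is.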
Surfaces with a (perhaps singular) flat Euclidean metric play a prominent role in Teichmüller theory and dynamics, and are connected to classical questions in billiards. More recently, this field has been connected to six-dimensional symplectic topology and to the homological algebra study of algebras defined by quivers with potential. We will review this circle of ideas, largely reporting on joint work with Tom Bridgeland.
There have been massive efforts to understand the parameter spaces of convex polytopes -- and great results such as the “g-Theorem” along the way. On the other hand, key questions are still open, such as the “fatness problem” for 4-dimensional polytopes, and the related question of whether we expect the same answers for convex polytopes (discrete-geometric objects) and for cellular spheres (a topological model). I will argue that we should not, and give some evidence for this.
Everybody has heard of the Faraday cage effect, in which a wire mesh does a good job of blocking electric fields. Surely the mathematics of such a famous and useful phenomenon has been long ago worked out and written up in the textbooks? It seems to be not so. One reason may be that the effect is not as simple as one might expect: it depends on the wires having finite radius. Nor is it as strong as one might imagine: the shielding improves only linearly as the wire spacing decreases. This talk will present results by Jon Chapman, Dave Hewett and myself including (a) numerical simulations, (b) a theorem proved by conformal mapping, (c) a continuous model derived by multiple scales analysis, (d) a discrete model derived by energy minimization, (e) a connection with the periodic trapezoidal rule for analytic integrands, and (f) a physical explanation.
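Item (e) refers to a classical fact worth seeing in action (a sketch, not taken from the talk): for a periodic analytic integrand, the trapezoidal rule converges geometrically in the number of sample points, far faster than its usual O(1/n^2) rate. Here we integrate exp(cos t) over one period; the exact value is 2*pi*I0(1), where I0 is the modified Bessel function, about 7.954926521:

```python
# Trapezoidal rule on a 2*pi-periodic analytic function: the error
# decays geometrically in the number of points n.

from math import cos, exp, pi

def trap_periodic(f, n):
    """Trapezoidal rule for a 2*pi-periodic function (endpoints merge)."""
    h = 2 * pi / n
    return h * sum(f(k * h) for k in range(n))

exact = 7.954926521012846   # 2*pi*I0(1)
f = lambda t: exp(cos(t))
for n in [2, 4, 8, 16]:
    print(n, abs(trap_periodic(f, n) - exact))
```

With only 16 points the error is already at the level of machine precision, which is the kind of super-algebraic behaviour that reappears in the analysis of the cage.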
The classical Apollonian circle packing is constructed by taking three mutually tangent circles in the plane and repeatedly inscribing circles in the (curved) triangles between them. This results in infinitely many circles, whose radii tend to zero. Furthermore, there is a remarkably simple asymptotic formula due to Kontorovich and Oh for the number of these circles with radii > r, as r tends to zero. This talk will describe this and related results.
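The counting result is deep, but generating the circles is elementary. A standard tool (not stated in the abstract) is the Descartes circle theorem: four mutually tangent circles with curvatures k1,...,k4 satisfy $(k_1+k_2+k_3+k_4)^2 = 2(k_1^2+k_2^2+k_3^2+k_4^2)$, so given three of the curvatures the fourth satisfies a quadratic, and the two roots satisfy k4 + k4' = 2(k1+k2+k3). A sketch:

```python
# Curvatures in an Apollonian packing via the Descartes circle theorem.
# The root quadruple (-1, 2, 2, 3) is an outer unit circle (curvature -1
# because it encloses the others) with three inscribed circles.

def swap(quad, i):
    """Replace quad[i] by the other root of the Descartes quadratic."""
    q = list(quad)
    q[i] = 2 * (sum(quad) - quad[i]) - quad[i]
    return tuple(q)

def descartes_ok(q):
    """Check (k1+k2+k3+k4)^2 == 2*(k1^2+k2^2+k3^2+k4^2)."""
    return sum(q) ** 2 == 2 * sum(k * k for k in q)

root = (-1, 2, 2, 3)
new = sorted(swap(root, i)[i] for i in range(4))
print(new)  # curvatures of the next inscribed circles: [3, 6, 6, 15]
```

Iterating the swap in all four positions generates every circle of the packing, with all curvatures staying integers; the Kontorovich-Oh formula counts how many appear below a given bound.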
The past few years have seen the emergence of a new research branch at the interface of combinatorics, algebra and geometry: "Fuß-Catalan combinatorics," as some people call it. It has been observed that there are several sets of objects associated to reflection groups, such as the (generalised) non-crossing partitions of Armstrong, the generalised cluster complex of Fomin and Reading, certain regions in the extended Shi arrangement, the geometric chains of Athanasiadis, etc., which are all enumerated by the Fuß-Catalan numbers associated with the underlying reflection group. Why this is so is only partially understood. I shall give an introduction to this fascinating subject of algebraic combinatorics, not assuming any prerequisites.
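The simplest instance of these numbers is classical and easy to check by machine (an illustration, not part of the talk): non-crossing partitions of {1,...,n} — partitions with no indices a < b < c < d where a, c lie in one block and b, d in another — are counted by the Catalan number C_n = binom(2n, n)/(n+1):

```python
# Brute-force count of non-crossing set partitions versus Catalan numbers.

from itertools import combinations
from math import comb

def partitions(elements):
    """All set partitions of a list, by recursion on the first element."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [first]] + p[i + 1:]
        yield [[first]] + p

def crossing(p):
    """True if two blocks interleave as a < b < c < d."""
    for b1, b2 in combinations(p, 2):
        for a, c in combinations(sorted(b1), 2):
            for b, d in combinations(sorted(b2), 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

for n in range(1, 7):
    count = sum(1 for p in partitions(list(range(1, n + 1)))
                if not crossing(p))
    print(n, count, comb(2 * n, n) // (n + 1))
```

The Fuß-Catalan numbers of the talk generalise this count from the symmetric group to arbitrary reflection groups.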
Understanding the statistical distribution of the primes is one of the outstanding problems in mathematics. It is, for example, the origin of the Riemann Hypothesis. The question of how many primes typically lie in an interval of a given size goes back to Gauss. It is the subject of an important conjecture due to Goldston and Montgomery. I will review the history of this problem and then explain a remarkable analogy between the primes and certain polynomials. In the case of polynomials, the analogue of the Goldston-Montgomery conjecture can be proved using ideas from mathematical physics relating to random matrices.
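Gauss's observation is easy to reproduce (an illustration only; the Goldston-Montgomery conjecture concerns the much subtler variance of these counts, not the mean): the density of primes near x is about 1/log x, so an interval of length H near x contains about H/log x primes on average:

```python
# Count primes in intervals of length H = 1000 between 100000 and 200000
# with a sieve, and compare the average to Gauss's prediction H / log x.

from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sieve

sieve = primes_up_to(200000)
H = 1000
counts = [sum(sieve[x:x + H]) for x in range(100000, 200000, H)]
mean = sum(counts) / len(counts)
print(f"mean primes per interval: {mean:.2f}")
print(f"H / log(x) prediction:    {H / log(150000):.2f}")
```

The two numbers agree to within a fraction of a prime per interval; the fluctuations of the individual counts around this mean are what the conjecture, and its random-matrix analogue for polynomials, describe.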
Twenty years ago, Wiles and Taylor-Wiles proved the modularity of semistable elliptic curves over the rational numbers. Subsequently this was extended by Breuil-Conrad-Diamond-Taylor to prove the modularity of all elliptic curves over the rational numbers. I will review what this means, and explain some of the progress since then on extending these results to more general number fields.
The eye is a complex organ and, as such, represents a rich source of fascinating problems for applied mathematicians, interested in understanding its anatomy and physiology and how these change during ageing and in disease. In this talk attention will focus on photoreceptors, light-sensing retinal cells whose length fluctuates on a daily basis. I will start by presenting a simple mathematical model, formulated as a free boundary problem, which can be used to determine whether the observed fluctuations in healthy photoreceptors may be attributed to changes in oxygen demand during periods of light and dark. I will then focus on retinitis pigmentosa, a degenerative disease that targets the photoreceptors and causes progressive loss of visual function. I will present a second mathematical model developed in order to determine whether hyperoxia, exposure to elevated oxygen levels, may be responsible for the patterns of photoreceptor degeneration associated with retinitis pigmentosa. If time permits, I will also explain how simple continuum models may be combined with experimental data on corneal angiogenesis in order to compare the mechanisms by which different angiogenic factors act.
Stochastic approximation algorithms are used to approximate solutions to fixed point equations that involve expectations of functions with respect to possibly unknown distributions. Reinforcement learning algorithms such as TD-learning and Q-learning are among their most famous applications. This talk will provide an overview of stochastic approximation, with a focus on optimizing the rate of convergence. Based on this general theory, the well-known slow convergence of Q-learning is explained: the CLT variance of the algorithm is typically infinite. Three new Q-learning algorithms are introduced to dramatically improve performance:
(i) The Zap Q-learning algorithm that has provably optimal asymptotic variance, and resembles the Newton-Raphson method
(ii) The PolSA algorithm that is based on Polyak's momentum technique, but with a specialized matrix momentum, and
(iii) The NeSA algorithm based on Nesterov's acceleration technique
Analysis of (ii) and (iii) requires entirely new analytic techniques. One approach is via coupling: conditions are established under which the parameter estimates obtained using the PolSA algorithm couple with those obtained using the Newton-Raphson based algorithm. Numerical examples confirm this behavior, and the remarkable performance of these algorithms.
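None of the three algorithms above fits in a few lines, but the basic recursion they all refine does. A minimal Robbins-Monro sketch (all names and parameters here are illustrative, and this is not the talk's Zap, PolSA or NeSA method): to solve E[f(theta, X)] = 0, iterate theta <- theta + a_n * f(theta, X_n) with step sizes a_n = g/(n+1). Taking f(theta, x) = 0.5 - 1{x <= theta} makes the root the median of X:

```python
# Robbins-Monro stochastic approximation: estimate the median of a
# Uniform(0, 1) variable from samples, without knowing the distribution.

import random

def robbins_monro(sample, f, theta0=0.0, steps=100_000, gain=1.0):
    theta = theta0
    for n in range(steps):
        theta += gain / (n + 1) * f(theta, sample())
    return theta

random.seed(1)
theta = robbins_monro(random.random, lambda th, x: 0.5 - (x <= th))
print(theta)  # close to 0.5, the median of Uniform(0, 1)
```

The choice of gain is exactly where the trouble described in the abstract arises: too small a scalar gain can make the CLT variance infinite, and the Zap algorithm replaces it with an adaptive matrix gain in the spirit of Newton-Raphson.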
In this talk I will give an introduction to Design Theory from the combinatorial perspective of (hyper)graph decompositions. I will survey some recent progress on the existence of decompositions, with particular attention to triangle decompositions of graphs, which provide a simple (yet still interesting) illustration of more general results.
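The smallest nontrivial triangle decomposition is a classical Steiner triple system, which exists exactly when n ≡ 1 or 3 (mod 6). As a concrete illustration (not taken from the talk), developing the base block {0, 1, 3} modulo 7 gives the Fano plane, and we can verify that its seven triples use every edge of K_7 exactly once:

```python
# A triangle decomposition of K_7: the Fano plane as a Steiner triple
# system, generated by shifting the base block {0, 1, 3} mod 7.

from itertools import combinations
from collections import Counter

triples = [tuple(sorted(((0 + i) % 7, (1 + i) % 7, (3 + i) % 7)))
           for i in range(7)]

# how often each edge of K_7 is covered by a triangle
cover = Counter(e for t in triples for e in combinations(t, 2))

assert all(cover[e] == 1 for e in combinations(range(7), 2))
print("K_7 decomposes into", len(triples), "triangles")
```

This works because the differences within {0, 1, 3} mod 7 are ±1, ±2, ±3, hitting every nonzero difference once; the general existence results surveyed in the talk replace such ad hoc constructions with far more flexible machinery.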