Time: Tuesday at 11:00
Location: MB-503, Maths Building
Organiser: Dr Weini Huang
Given a compact surface, we consider the set of area-preserving flows with isolated fixed points. The study of these flows dates back to Novikov in the 80s and since then many properties have been investigated. Starting from an overview of the known results, we show that typical such flows admitting several minimal components are mixing when restricted to each minimal component and we provide an estimate on the decay of correlations for smooth observables.
We investigate the impact of noise on a simple, paradigmatic two-dimensional piecewise-smooth dynamical system. For that purpose, we consider the motion of a particle subjected to dry friction and coloured noise. The finite correlation time of the noise provides an additional dimension in phase space, causes a nontrivial probability current, and establishes a proper nonequilibrium regime. Furthermore, the setup allows for the study of stick-slip phenomena, which show up as a singular component in the stationary probability density. Analytic insight can be provided by application of the unified coloured noise approximation, developed by P. Jung and P. Hänggi. The analysis of probability currents and of power spectral densities underpins the observed stick-slip transition, which is related to a critical value of the noise correlation time.
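The model described above can be sketched numerically. The following is a minimal illustration, not the speaker's code: it integrates, by Euler-Maruyama, a particle with velocity v under dry friction of strength mu, driven by Ornstein-Uhlenbeck (coloured) noise with correlation time tau and intensity D. All parameter values are arbitrary choices for the sketch.

```python
import math, random

def simulate(mu=1.0, tau=1.0, D=1.0, dt=1e-3, steps=10_000, seed=1):
    """Euler-Maruyama integration of a dry-friction particle driven by
    Ornstein-Uhlenbeck (coloured) noise:
        dv/dt   = -mu * sign(v) + eta
        deta/dt = -eta/tau + sqrt(2D)/tau * xi(t),   xi = white noise.
    Returns the velocity trajectory."""
    rng = random.Random(seed)
    v, eta = 0.0, 0.0
    traj = []
    for _ in range(steps):
        sgn = (v > 0) - (v < 0)          # sign(v), taken as 0 at v == 0
        v += (-mu * sgn + eta) * dt
        eta += (-eta / tau) * dt + (math.sqrt(2 * D) / tau) * math.sqrt(dt) * rng.gauss(0, 1)
        traj.append(v)
    return traj

traj = simulate()
stick = sum(1 for v in traj if abs(v) < 1e-4)  # crude proxy for "stick" episodes
```

Varying tau in such a simulation is one crude way to observe the stick-slip transition the abstract refers to.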
The hydrodynamic approximation is an extremely powerful tool to describe the behavior of many-body systems such as gases. At the Euler scale (that is, when variations of densities and currents occur only on large space-time scales), the approximation is based on the idea of local thermodynamic equilibrium: locally, within fluid cells, the system is in a Galilean or relativistic boost of a Gibbs equilibrium state. This is expected to arise in conventional gases thanks to ergodicity and Gibbs thermalization, which in the quantum case is embodied by the eigenstate thermalization hypothesis. However, integrable systems are well known not to thermalize in the standard fashion. The presence of infinitely-many conservation laws precludes Gibbs thermalization, and instead generalized Gibbs ensembles emerge. In this talk I will introduce the associated theory of generalized hydrodynamics (GHD), which applies the hydrodynamic ideas to systems with infinitely-many conservation laws. It describes the dynamics from inhomogeneous states and in inhomogeneous force fields, and is valid both for quantum systems such as experimentally realized one-dimensional interacting Bose gases and quantum Heisenberg chains, and classical ones such as soliton gases and classical field theory. I will give an overview of what GHD is, how its main equations are derived and its relation to quantum and classical integrable systems. If time permits I will touch on the geometry that lies at its core, how it reproduces the effects seen in the famous quantum Newton cradle experiment, and how it leads to exact results in transport problems such as Drude weights and non-equilibrium currents.
This is based on various collaborations with Alvise Bastianello, Olalla Castro Alvaredo, Jean-Sébastien Caux, Jérôme Dubail, Robert Konik, Herbert Spohn, Gerard Watts and my student Takato Yoshimura, and strongly inspired by previous collaborations with Denis Bernard, M. Joe Bhaseen, Andrew Lucas and Koenraad Schalm.
Gibbs measures are a useful class of invariant measures for hyperbolic systems, of which the best known is the natural Sinai-Ruelle-Bowen measure. It is a standard fact that the volume measure on a small piece of unstable manifold can be pushed forward under the map (or flow) and in the limit converges to the Sinai-Ruelle-Bowen measure. Pesin asked the question: How can this construction be adapted to give other Gibbs measures? In this talk we will describe one solution.
Large dynamical fluctuations - atypical realizations of the dynamics sustained over long periods of time - can play a fundamental role in determining the properties of collective behavior of both classical and quantum non-equilibrium systems. Rare dynamical fluctuations, however, occur with a probability that often decays exponentially in their time extent, thus making them difficult to observe and exploit directly in experiments. In this talk I will explain, using methods from dynamical large deviations, how rare dynamics of a given (Markovian) open quantum system can always be obtained from the typical realizations of an alternative (also Markovian) system. The correspondence between these two sets of realizations can be used to engineer and control open quantum systems with a desired statistics "on demand". I will illustrate these ideas by studying the photon-emission behaviour of a three-qubit system which displays a sharp dynamical crossover between active and inactive dynamical phases.
In [1] Émile Le Page established the Hölder continuity of the top Lyapunov exponent for irreducible random linear cocycles with a gap between their first and second Lyapunov exponents. An example of B. Halperin (see Appendix 3 in [2]) suggests that in general, uniformly hyperbolic cocycles apart, this is the best regularity that one can hope for. We will survey recent results on, and limitations of, the regularity of the Lyapunov exponents for random GL(2)-cocycles.
[1] Émile Le Page, Régularité du plus grand exposant caractéristique des produits de matrices aléatoires indépendantes et applications. Ann. Inst. H. Poincaré Probab. Statist. 25 (1989), no. 2, 109–142.
[2] Barry Simon and Michael Taylor, Harmonic analysis on SL(2,R) and smoothness of the density of states in the one-dimensional Anderson model. Comm. Math. Phys. 101 (1985), no. 1, 1–19.
Random matrices play a crucial role in various fields of mathematics and physics. In particular in the field of quantum chaos Hermitian random matrix ensembles represent universality classes for spectral features of Hamiltonians with classically chaotic counterparts. In recent years the study of non-Hermitian but PT-symmetric quantum systems has attracted a lot of attention. These are non-Hermitian systems that have an anti-unitary symmetry, which is often interpreted as a balance of loss and gain in a system.
In this talk the question of whether and how the standard ensembles of Hermitian quantum mechanics can be modified to yield PT-symmetric counterparts is addressed. In particular it is argued that using split-complex and split-quaternionic numbers two new PT-symmetric random matrix ensembles can be constructed. These matrices have either real or complex conjugate eigenvalues, the statistical features of which are analysed for 2 × 2 matrices.
Nonequilibrium steady states for two classes of Hamiltonian models with different local dynamics are discussed. Models in the first class have chaotic dynamics. An easy-to-compute algorithm that goes from micro-dynamics to macro-profiles such as energy is proposed, and issues such as memory, finite-size effects and their relation to geometry are discussed. Models in the second class have integrable dynamics. They become ergodic when driven at the boundary, but continue to exhibit anomalous behavior such as non-Gibbsian local distributions. The results follow from a mixture of numerical and theoretical considerations, some of which are rigorous. This is joint work with J.-P. Eckmann, P. Bálint and K. Lin.
Reaction-diffusion processes have been widely used to study dynamical processes in epidemics and ecology in networked metapopulations. In the context of epidemics, reaction processes are understood as contagions within each subpopulation (patch), while diffusion represents the mobility of individuals between patches. Recently, the characteristics of human mobility, such as its recurrent nature, have been proven crucial to understand the phase transition to endemic epidemic states. Here, by developing a framework able to cope with the elementary epidemic processes, the spatial distribution of populations and the commuting mobility patterns, we discover three different critical regimes of the epidemic incidence as a function of these parameters. Interestingly, we reveal a regime of the reaction-diffusion process in which, counter-intuitively, mobility is detrimental to the spread of disease. We analytically determine the precise conditions for the emergence of any of the three possible critical regimes in real and synthetic networks. Joint work with J. Gómez-Gardeñes and D. Soriano-Paños.
Many-body systems involving long-range interactions, such as self-gravitating particles or unscreened plasmas, give rise to equilibrium and nonequilibrium properties that are not seen in short-range systems. One such property is that long-range systems can have negative heat capacities, which implies that these systems cool down by absorbing energy. This talk will discuss the origin of this unusual property, as well as some of its connections with phase transitions, metastability, and the nonequivalence of statistical ensembles. It will be seen that the essential difference between long- and short-range systems is that the entropy can be nonconcave as a function of the energy for long-range systems. For short-range systems, the entropy is always concave.
The question of deciding whether a given function is injective is important in a number of applications. For example, where the function defines a vector field, injectivity is sufficient to rule out multiple fixed points of the associated flow. One useful approach is to associate sets of matrices or generalised graphs with a function, and make claims about injectivity based on (finite) computations on these matrices or graphs. For a large class of functions, a novel way of doing this will be presented. Well-known results on functions with signed Jacobian, and more recent results in chemical reaction network theory, are both special cases of the approach presented. However the technique does not provide a unique way of associating matrices/graphs with functions, leading to some interesting open questions.
Nonlinear media host a wide variety of localized coherent structures (bright and dark solitons, vortices, aggregates, spirals, etc.) with complex intrinsic properties and interactions. In many situations such as optical communications, condensed matter waves and biochemical aggregates it is crucial to study the interaction dynamics of coherent structures arranged in periodic lattices. In this talk I will present results concerning chains and lattices of coherent structures and their dynamical reductions from PDEs to ODEs, and all the way down to discrete maps. Particular attention will be given to a) spatially localized vibrations (breathers) in 1D chains of coupled bright solitons and b) vortex lattices dynamics and their crystalline configurations.
I will centre my talk mainly on the general theme of control and synchronization, and on how the problem of multiple current-reversals in ratchets could be translated into that of achieving asymptotic stability and tracking of its dynamical and transport properties. Current-reversal is an intriguing phenomenon that has been central to recent experimental and theoretical investigations of transport based on the ratchet mechanism. Research in this domain is largely motivated by applications to a variety of systems such as asymmetric crystals, semiconductor surfaces under light radiation, vortices in Josephson junction arrays, micro-fluidic channels, transport of ion channels and muscle contraction. Here, by considering a system of two interacting ratchets, we will demonstrate how the interaction can be used to control the reversals. In particular, we will show that current reversal that exists in a single driven ratchet system can ultimately be eliminated in the presence of a second ratchet and then establish a connection between the underlying dynamics and reversal-free regime. The conditions for current-reversal-free transport will be given. Furthermore, we will discuss briefly some applications of our results, recent challenges and possible direction for future works.
We compute the distribution of the partition functions for a class of one-dimensional Random Energy Models (REM) with logarithmically correlated random potential, above and at the glass transition temperature. The random potential sequences represent various versions of the 1/f noise generated by sampling the two-dimensional Gaussian Free Field (2dGFF) along various planar curves. The method is based on an analytical continuation of the Selberg integral from positive integers to the complex plane. In particular, we unveil a duality relation satisfied by the suitable generating function of free energy cumulants in the high-temperature phase. It reinforces the freezing scenario hypothesis for that generating function, from which we derive the distribution of extrema for the 2dGFF on the [0,1] interval and unit circle. If time permits, the relation to the velocity statistics in decaying Burgers turbulence and to the distribution of length of curves in Liouville quantum gravity will be briefly discussed. The results reported are obtained in collaboration with J.-P. Bouchaud, P. Le Doussal, and A. Rosso.
Correlation functions, or factorial moments, are important characteristics of spatial point processes. The question under consideration is to what extent the first two correlation functions identify the point process. This is a non-linear, infinite-dimensional version of the classical truncated moment problem. In collaboration with J. Lebowitz and E. Speer we derived general conditions, giving rise also to a new approach to moment problems, and obtained more concrete results in particular situations.
It is a rather common belief that the only probability distribution occurring in the statistical physics of many-particle systems is that of Boltzmann and Gibbs (BG). This point of view is too limited. The BG-distribution, when seen as a function of parameters such as the inverse temperature and the chemical potential, is a member of the exponential family. This observation is important to understand the structure of statistical mechanics and its connection with thermodynamics. It is also the starting point of the generalizations discussed below. Recently, the notion of a generalized exponential family has been introduced, both in the mathematics and in the physics literature. A sub-class of this generalized family is the q-exponential family, where q is a real parameter describing the deformation of the exponential function. It is the intention of this talk to show the relevance for statistical physics of these generalizations of the BG-distribution. Particular attention will go to the configurational density of classical mono-atomic gases in the microcanonical ensemble. These belong to the q-exponential family, where q tends to 1 as the number of particles tends to infinity. Hence, in this limit the density converges to the BG-distribution.
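For readers unfamiliar with the deformation, here is a minimal sketch of the q-exponential function, exp_q(x) = [1 + (1-q)x]_+^{1/(1-q)}, which recovers the ordinary exponential in the limit q -> 1 mentioned above:

```python
import math

def exp_q(x, q):
    """Deformed (Tsallis) q-exponential: [1 + (1-q)x]_+^(1/(1-q)).
    At q = 1 it reduces to the ordinary exponential."""
    if q == 1:
        return math.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

# As q -> 1 the q-exponential converges pointwise to exp(x):
for x in (-1.0, 0.5, 2.0):
    assert abs(exp_q(x, 1 - 1e-6) - math.exp(x)) < 1e-3
```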
We review the Ising model with random-site or random-bond disorder, which has been controversial in both two and four dimensions. In the two-dimensional case, the controversy is between the strong universality hypothesis which maintains that the leading critical exponents are the same as in the pure case and the weak universality hypothesis, which favours dilution-dependent leading critical exponents. Here the random-site version of the model is subject to a finite-size scaling analysis, paying special attention to the implications for multiplicative logarithmic corrections. The analysis is supportive of the scaling relations for logarithmic corrections and of the strong scaling hypothesis in the 2D case. In the four-dimensional case unusual corrections to scaling characterize the model, and the precise nature of these corrections has been debated. Progress made in determining the correct 4D scenario is outlined.
Abdullahi Umar has discovered that many celebrated sequences of combinatorial numbers, including the factorials, binomial coefficients, Bell, Catalan, Schröder, Stirling and Lah numbers solve counting problems in certain naturally defined inverse semigroups of partial bijections on a finite set. I will give an account of some of these results, together with the beginning of a study of q-analogues where we consider linear bijections between subspaces of a finite vector space (and some very interesting open problems arise).
Given a graph, pick any involution (an automorphism of order two) and delete all of the vertices which are moved by this involution. Repeat with the new graph until the current graph is involution-free. This involution-free graph is uniquely defined (up to isomorphism) by the original, i.e., it is independent of the choice of involution at each stage. This is proved using a lemma of Newman on the confluence of reduction systems.
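The reduction procedure can be written out directly. The brute-force sketch below (illustrative only, and feasible only for tiny graphs) searches all vertex permutations for an involutory automorphism with at least one moved vertex, deletes the moved vertices, and repeats:

```python
from itertools import permutations

def involution_reduce(vertices, edges):
    """Repeatedly find a non-trivial order-two automorphism (involution)
    and delete every vertex it moves, until the graph is involution-free.
    Brute force over all vertex permutations -- tiny graphs only."""
    verts = list(vertices)
    E = {frozenset(e) for e in edges}
    while True:
        found = None
        for perm in permutations(verts):
            sigma = dict(zip(verts, perm))
            if (all(sigma[sigma[v]] == v for v in verts)           # order two
                    and any(sigma[v] != v for v in verts)          # non-trivial
                    and all(frozenset((sigma[a], sigma[b])) in E   # edge-preserving
                            for a, b in (tuple(e) for e in E))):
                found = sigma
                break
        if found is None:
            return verts, E
        moved = {v for v in verts if found[v] != v}
        verts = [v for v in verts if v not in moved]
        E = {e for e in E if not (e & moved)}

# Path 0-1-2: swapping the endpoints is an involution; deleting the moved
# vertices leaves the single vertex 1, which is involution-free.
verts, E = involution_reduce([0, 1, 2], [(0, 1), (1, 2)])
```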
In a seminal paper, Alon and Tarsi introduced an algebraic technique for proving upper bounds on the choice number of graphs (and thus, in particular, upper bounds on their chromatic number). The upper bound on the choice number of G obtained via their method was later coined the Alon-Tarsi number of G, denoted AT(G). They provided a combinatorial interpretation of this parameter in terms of the Eulerian sub-digraphs of an appropriate orientation of G. Shortly afterwards, for the special case of line graphs of d-regular d-edge-colorable graphs, Alon gave another interpretation of AT(G), this time in terms of the signed d-colorings of the line graph. In the talk I will generalize both results, and then use these generalizations to prove some choosability results.
In the first part of the talk I will introduce the chromatic, choice, and Alon-Tarsi numbers of graphs. In the second part I will state the two generalizations as well as some applications.
The notion of residual and derived design of a symmetric design was introduced in a classic paper by R. C. Bose (1939). A quasi-residual (quasi-derived) design is a 2-design which has the parameters of a residual (derived) design. The embedding problem of a quasi-residual design into a symmetric design is an old and natural question. A Menon design of order h² is a symmetric (4h²,2h²-h, h²-h) design. Quasi-residual and quasi-derived designs of a Menon design have parameters 2-(2h²+h,h²,h²-h) and 2-(2h²-h,h²-h,h²-h-1), respectively.
We use regular Hadamard matrices to construct non-embeddable quasi-residual and quasi-derived Menon designs. As applications, the first two new infinite families of non-embeddable quasi-residual and quasi-derived Menon designs are constructed. This is a joint work with T. A. Alraqad.
The Ramsey number r_{k}(s,n) is the minimum N such that every red-blue coloring of the k-tuples of an N-element set contains either a red set of size s or a blue set of size n, where a set is called red (blue) if all k-tuples from this set are red (blue). Determining or estimating Ramsey numbers is one of the central problems in combinatorics. In this talk we discuss recent progress on several old and very basic hypergraph Ramsey problems.
Joint work with D. Conlon and J. Fox.
A tournament is an orientation of a complete graph. Sumner conjectured in 1971 that any tournament G on 2n-2 vertices contains any directed tree T on n vertices. Taking G to be a regular tournament on 2n-3 vertices and T to be an outstar shows that this conjecture, if true, is best possible. Many partial results have been obtained towards this conjecture.
In this talk I shall outline how a randomised embedding algorithm can be used to prove an approximate version of Sumner's conjecture, by first proving a stronger result for the case when T has bounded maximum degree. Furthermore, I will briefly sketch how by considering the extremal cases of this proof we may deduce that Sumner's conjecture holds for all sufficiently large n.
This is joint work with Daniela Kühn and Deryk Osthus.
Fiala has shown with computer aid that there are 35 laws of length at most six, involving the product operation only, which have the property of the title (discounting renaming, cancelling, mirroring and symmetry). However, he did not provide humanly comprehensible proofs of these facts. We show that it is possible to give short, understandable proofs of Fiala's results and to separate the loops and groups into classes.
I will introduce the problem of reconstructing population pedigrees from their subpedigrees (pedigrees of sub-populations) and present a construction of pairs of non-isomorphic pedigrees that have the same collection of sub-pedigrees. I will then show that reconstructing pedigrees is equivalent to reconstructing hypergraphs with isomorphisms from a suitably chosen group acting on the ground set. I will then discuss some ideas to characterize non-reconstructible pedigrees.
Crystals are certain labelled graphs which give a combinatorial understanding for certain representations of simple Lie algebras. Although crystals are known to exist for certain important representations, understanding what they look like is tricky, and an important theme in combinatorial representation theory is constructing models of crystals, where the vertices are given by simple combinatorial objects, with combinatorial rules for determining the edges.
I'll try to give a brief but comprehensible overview to motivate, and then concentrate on one particular crystal, for which there is a family of models based on partitions.
We study a problem of minimising the total number of zeros in the gaps between blocks of consecutive ones in the columns of a binary matrix by permuting its rows. The problem is known to be NP-hard. An analysis of the structure of an optimal solution allows us to focus on a restricted solution space, and to use an implicit representation for searching the space. We develop an exact solution algorithm, which is polynomial if the number of columns is fixed, and two constructive heuristics to tackle instances with an arbitrary number of columns. The heuristics use a novel solution representation based upon column sequencing. In our computational study, all heuristic solutions are either optimal or close to an optimum. One of the heuristics is particularly effective, especially for problems with a large number of rows.
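The objective can be made concrete with a toy baseline (illustrative only; the exact algorithm and heuristics of the talk are far more sophisticated). The sketch counts, per column, the zeros lying strictly between the first and last one, and minimises by exhaustive search over row permutations:

```python
from itertools import permutations

def gap_zeros(matrix):
    """Total number of zeros lying strictly between the first and last 1
    in each column (zeros inside the gaps between blocks of ones)."""
    total = 0
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        ones = [i for i, x in enumerate(col) if x == 1]
        if ones:
            total += sum(1 for i in range(ones[0], ones[-1]) if col[i] == 0)
    return total

def brute_force_min(matrix):
    """Minimise gap_zeros over all row permutations -- exponential, tiny instances only."""
    return min(gap_zeros([matrix[i] for i in p])
               for p in permutations(range(len(matrix))))

M = [[1, 0],
     [0, 1],
     [1, 1]]
# As given, column 0 reads 1,0,1 and contributes one gap zero; reordering
# the rows as (0, 2, 1) makes the ones consecutive in both columns.
best = brute_force_min(M)
```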
A transversal of a latin square is a selection of entries that hits each row, column and symbol exactly once. We can construct latin squares whose transversals are constrained in various ways. For orders that are not twice a prime, these constructions yield 2-maxMOLS, that is, pairs of orthogonal latin squares that cannot be extended to a triple of MOLS. If only Euclid's theorem were false, we would have nearly solved the 2-maxMOLS problem.
For a family of subsets of {1,...,n}, ordered by inclusion, and a partially-ordered set P, we say that the family is P-free if it does not contain a subposet isomorphic to P. We are interested in finding ex(n,P), the largest size of a P-free family of subsets of [n]. It is conjectured that, for any fixed P, this quantity is (k+o(1)){n choose ⌊n/2⌋} for some fixed integer k, depending only on P.
Recently, Boris Bukh has verified the conjecture for P which are in a "tree shape". There are some other small posets P for which the conjecture has been verified. The smallest for which it is unknown is Q_{2}, the Boolean lattice on two elements. We will discuss the best-known upper bound for ex(n,Q_{2}) and an interesting open problem in graph theory that, if solved, would improve this bound. This is joint work with Maria Axenovich (Iowa State University) and Jacob Manske (Texas State University).
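To make the quantity ex(n,P) concrete, here is an illustrative brute-force computation of ex(3,Q_2), with containment in the usual weak-subposet sense (four distinct sets A, B, C, D with A inside both B and C, and both inside D); it is feasible only for very small n:

```python
from itertools import combinations

def contains_Q2(family):
    """True if four distinct sets A, B, C, D in the family satisfy
    A <= B <= D and A <= C <= D with B != C (a weak copy of Q_2)."""
    F = list(family)
    for A in F:
        for D in F:
            if A != D and A <= D:
                middles = [X for X in F if X not in (A, D) and A <= X <= D]
                if len(middles) >= 2:
                    return True
    return False

def ex_Q2(n):
    """Largest Q_2-free family of subsets of {1,...,n}, by brute force."""
    subsets = [frozenset(s) for r in range(n + 1)
               for s in combinations(range(1, n + 1), r)]
    for k in range(len(subsets), 0, -1):
        if any(not contains_Q2(fam) for fam in combinations(subsets, k)):
            return k
    return 0

# For n = 3 the two middle levels {1},{2},{3},{12},{13},{23} are Q_2-free,
# and no family of 7 subsets is, so ex(3, Q_2) = 6 = 2 * (3 choose 1).
val = ex_Q2(3)
```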
Vivek Jain asked whether, when G is a finite group and H is a core-free subgroup of G, it is possible to generate G by a set of coset representatives of H in G. The answer is yes: the proof uses a result of Julius Whiston about the maximal size of an independent set in the symmetric group.
I will discuss the proof and some slight extensions, and will also talk about a parameter of a group conjecturally related to the maximum size of an independent set; this involves an open question about the subgroup lattices of finite groups.
The family of intervals of a binary structure on a set S satisfies certain well-known closure properties; a family of subsets of S with these properties is called weakly partitive.
An interval X is called strong provided that for each interval Y, if the intersection of X and Y is non-empty then Y is a subset of X or Y contains X. Using the notion of strong interval, and a study of the characteristics of elements of a weakly partitive family, Pierre Ille and I gave a proof in [1] of the result that, given a weakly partitive family I on a set S, there is a binary structure on S whose intervals are exactly the elements of I.
[1] Weakly partitive families on infinite sets, Pierre Ille and Robert E. Woodrow, Contributions to Discrete Mathematics, Vol 4, Number 1, 2009 pp. 54–79.
Classically, the Ising model in statistical physics is defined on a graph. But through the random cluster formulation we can make sense of the Ising partition function in the wider context of an arbitrary matroid. I expect most of the talk will be spent setting the scene. But eventually I'll come round to discussing the computational complexity of evaluating the partition function on various classes of matroids (graphic, regular and binary). I'm neither a physicist nor a card-carrying matroid theorist, so the talk should be pretty accessible.
This is joint work with Leslie Goldberg (Liverpool).
A finite set X in some Euclidean space R^{n} is called Ramsey if for any k there is a d such that whenever R^{d} is k-coloured it contains a monochromatic set congruent to X. A long-standing open problem is to characterise the Ramsey sets.
In this talk I will discuss the background to this problem, a new conjecture, and some group theoretic questions this new conjecture raises.
The perfect matching polytope of a graph G is the convex hull of the incidence vectors of all perfect matchings in G. We characterise bipartite graphs and near-bipartite graphs whose perfect matching polytopes have diameter 1.
We prove that almost surely a random graph process becomes Maker's win in the Maker-Breaker games "k-vertex-connectivity", "perfect matching" and "Hamiltonicity" exactly when its minimum degree first becomes 2k, 2 and 4 respectively.
Combinatorial representations are generalisations of linear representations of matroids based on functions over an alphabet. In this talk, we define representations of a family of bases (r-sets of an n-set). We first show that any family is representable over some finite alphabet. We then link this topic with design theory, and especially Wilson's theory of PBD-closed sets. This allows us to show that all graphs (r=2) can be represented over all large enough alphabets. If time permits, we finally give a characterisation of families representable over a given alphabet as subgraphs of a determined hypergraph.
A conference matrix is an n×n matrix C with zeros on the diagonal and entries ±1 elsewhere which satisfies CC^{T}=(n-1)I. Such a matrix has the maximum possible determinant given that its diagonal entries are zero and the other entries have modulus at most 1.
Conference matrices first arose in the 1950s in connection with conference telephony, and more recently have had applications in design of experiments in statistics. They have close connections with other kinds of combinatorial structure such as strongly regular graphs and Hadamard matrices.
It is known that the order of a conference matrix must be even, and that it is equivalent to a symmetric matrix if n is congruent to 2 (mod 4) or to a skew-symmetric matrix if n is congruent to 0 (mod 4). In the second case, they are conjectured to exist for all admissible n, but there are some restrictions in the first case (for example, there are no conference matrices of order 22 or 34). Statisticians are interested to know what is the maximum possible determinant in cases where a conference matrix does not exist.
I will give a gentle introduction to the subject, and raise a recent open question by Dennis Lin.
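As a concrete illustration (not taken from the talk), the classical Paley construction yields a symmetric conference matrix of order q+1 for any odd prime q congruent to 1 (mod 4), using the quadratic-residue character. The sketch below builds the order-6 example and checks the defining identity:

```python
def paley_conference(q):
    """Symmetric conference matrix of order q+1 from quadratic residues
    mod q (q an odd prime with q % 4 == 1). Rows and columns are indexed
    by {infinity, 0, 1, ..., q-1}."""
    residues = {(x * x) % q for x in range(1, q)}
    chi = lambda a: 0 if a % q == 0 else (1 if a % q in residues else -1)
    n = q + 1
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[0][i] = C[i][0] = 1          # border of ones against infinity
        for j in range(1, n):
            C[i][j] = chi(i - j)       # quadratic-residue character
    return C

def is_conference(C):
    """Check the defining identity C C^T = (n-1) I."""
    n = len(C)
    return all(sum(C[i][k] * C[j][k] for k in range(n)) == ((n - 1) if i == j else 0)
               for i in range(n) for j in range(n))

C6 = paley_conference(5)   # order 6, the smallest nontrivial symmetric case
ok = is_conference(C6)
```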
The Graph Minors Project of Robertson and Seymour is one of the highlights of twentieth-century mathematics. In a long series of mostly difficult papers they prove theorems that give profound insight into the qualitative structure of members of proper minor-closed classes of graphs. This insight enables them to prove some remarkable banner theorems, one of which is that in any infinite set of graphs there is one that is a minor of another; in other words, graphs are well-quasi-ordered under the minor order.
A canonical way to obtain a matroid is from a set of columns of a matrix over a field. If each column has at most two nonzero entries there is an obvious graph associated with the matroid; thus it is not hard to see that matroids generalise graphs. Robertson and Seymour always believed that their results were special cases of more general theorems for matroids obtained from matrices over finite fields. For over a decade, Jim Geelen, Bert Gerards and I have been working towards achieving this generalisation. In this talk I will discuss our success in achieving the generalisation for binary matroids, that is, for matroids that can be obtained from matrices over the 2-element field.
In this talk I will give a very general overview of my work with Geelen and Gerards. I will not assume familiarity with matroids nor will I assume familiarity with the results of the Graph Minors Project.
A family of graphs F on a fixed set of n vertices is said to be triangle-intersecting if for any two graphs G,H in F, the intersection of G and H contains a triangle. Simonovits and Sós conjectured that such a family has size at most (1/8)2^{{n choose 2}}, and that equality holds only if F consists of all graphs containing some fixed triangle. Recently, the author, Yuval Filmus and Ehud Friedgut proved this conjecture, using discrete Fourier analysis, combined with an analysis of the properties of random cuts in graphs. We will give a sketch of our proof, and then discuss some related open questions.
All will be based on joint work with Yuval Filmus (University of Toronto) and Ehud Friedgut (Hebrew University of Jerusalem).
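To make the statement concrete, the bound can be checked by brute force in the smallest nontrivial case, n = 4, where (1/8)2^{6} = 8. The sketch below (illustrative only, nothing like the Fourier-analytic proof) finds a maximum triangle-intersecting family as a maximum clique in a compatibility graph on the triangle-containing graphs:

```python
from itertools import combinations

n = 4
edges = list(combinations(range(n), 2))            # the 6 possible edges
bit = {e: 1 << i for i, e in enumerate(edges)}     # edge -> bitmask
triangles = [bit[(a, b)] | bit[(a, c)] | bit[(b, c)]
             for a, b, c in combinations(range(n), 3)]

def has_triangle(g):
    return any(g & t == t for t in triangles)

# Candidate family members must themselves contain a triangle (G = G ∩ G);
# two graphs are compatible if their intersection contains a triangle.
graphs = [g for g in range(1 << len(edges)) if has_triangle(g)]
adj = {g: {h for h in graphs if h != g and has_triangle(g & h)} for g in graphs}

best = 0
def grow(clique_size, cand):
    """Simple branch-and-bound maximum clique on the compatibility graph."""
    global best
    if clique_size > best:
        best = clique_size
    cand = set(cand)
    while cand:
        if clique_size + len(cand) <= best:
            return
        g = cand.pop()
        grow(clique_size + 1, cand & adj[g])

grow(0, graphs)
# The maximum is 8 = (1/8) * 2**6, attained by all graphs containing
# one fixed triangle (3 fixed edges, the other 3 edges free).
```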
Ge and Stefankovic recently introduced a novel two-variable graph polynomial. When specialised to a bipartite graph G and evaluated at the point (1/2,1), the polynomial gives the number of independent sets in the graph. Inspired by this polynomial, they also introduced a Markov chain which, if rapidly mixing, would provide an efficient sampling procedure for independent sets in G. The proposed Markov chain is promising, in the sense that it overcomes the most obvious barrier to mixing. Unfortunately, by exhibiting a sequence of counterexamples, we can show that the mixing time of their Markov chain may be exponential in the size of the instance G.
I'll play down the complexity-theoretic motivation for this investigation, and concentrate on the combinatorial aspects, namely the graph polynomial and the construction of the counterexamples.
This is joint work with Leslie Ann Goldberg (Liverpool). A preprint is available as arXiv:1109.5242.
An elementary problem when writing a computer program is how to swap the contents of two variables. Although the typical approach consists of using a buffer, this operation can actually be performed using XOR without memory. In this talk, we aim to generalise this approach to compute any function without memory.
We introduce a novel combinatorial framework for procedural programming languages, where programs are allowed to update only one variable at a time without the use of any additional memory. We first prove that any function of all the variables can be computed in this fashion. Furthermore, we prove that any bijection can be computed in a linear number of updates. We conclude the talk by returning to our opening example and deriving the exact number of updates required to compute any manipulation of variables.
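The opening example, swapping two variables with three single-variable XOR updates and no buffer, can be written out explicitly (a standard trick, shown here for concreteness):

```python
def swap_xor(x, y):
    """Swap two integers using three single-variable XOR updates,
    never touching a third variable."""
    x ^= y        # x now holds x0 ^ y0
    y ^= x        # y = y0 ^ (x0 ^ y0) = x0
    x ^= y        # x = (x0 ^ y0) ^ x0 = y0
    return x, y

assert swap_xor(3, 5) == (5, 3)
```

Each line updates exactly one variable as a function of the current values of both, which is precisely the memoryless model of computation described in the abstract.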
I will talk about some work of Ian Wanless and his student Joshua Browning, and some further work that Ian and I did last month.
We are interested in the maximum number of subsquares of order m which a Latin square of order n can have, where we regard m as being fixed and n as varying and large. In many cases this maximum is (up to a constant) a power n^{r}, for some exponent r depending on m. However, we cannot prove that this always holds; the smallest value of m for which it is not known is m = 7.
A related problem concerns the maximum number of Latin squares isotopic to a fixed square of order m.
We shall use a theorem of probability to prove a geometrical result, which when applied in an analytical context yields an interesting and surprisingly strong result in combinatorics on the existence of long arithmetic progressions in sums of two sets of integers. For the sake of exposition, we might focus on a version of the final result for vector spaces over finite fields: if A is a subset of F_{q}^{n} of some fixed size, then how large a subspace must A+A contain?
Joint work with Ernie Croot and Izabella Laba.
Let H be a graph. The function ex(n,H) is the maximum number of edges that an n-vertex graph can have while containing no subgraph isomorphic to H.
If H is not bipartite then the asymptotic behaviour of ex(n,H) is known, but if H is bipartite then in general this is not the case. This talk will focus on the case that H is a complete bipartite graph. I will review the previous constructions from a geometrical point of view and explain how this enables us to improve the lower bound on ex(n,K_{5,5}).
There are only a few methods for analysing the rate of convergence of an ergodic Markov chain to its stationary distribution. One is the canonical path method of Jerrum and Sinclair. This method applies to Markov chains which have no negative eigenvalues. Hence it has become standard practice for theoreticians to work with lazy Markov chains, which do absolutely nothing with probability 1/2 at each step. This must be frustrating for practitioners, who want to use the most efficient Markov chain possible.
I will discuss how laziness can be avoided by the use of a twenty-year-old lemma of Diaconis and Stroock, or my recent modification of that lemma. As an illustration, I will apply the new lemma to Jerrum and Sinclair's well-known chain for sampling perfect matchings in a bipartite graph.
A typical result in graph theory reads as follows: under certain conditions, a given graph G has some property P. For example, a classical theorem of Dirac asserts that every n-vertex graph G of minimum degree at least n/2 is Hamiltonian, where a graph is called Hamiltonian if it contains a cycle that passes through every vertex of the graph.
Recently, there has been a trend in extremal graph theory where one revisits such classical results, and attempts to see how strongly G possesses the property P. In other words, the goal is to measure the robustness of G with respect to P. In this talk, we discuss several measures that can be used to study robustness of graphs with respect to various properties. To illustrate these measures, we present three extensions of Dirac's theorem.
Graphs and digraphs behave quite differently, and many classical results for graphs are often trivially false when extended to general digraphs. Therefore it is usually necessary to restrict to a smaller family of digraphs to obtain meaningful results. One such very natural family is Eulerian digraphs, in which the in-degree equals out-degree at every vertex.
In this talk, we discuss several natural parameters for Eulerian digraphs and study their connections. In particular, we show that for any Eulerian digraph G with n vertices and m arcs, the minimum feedback arc set (the smallest set of arcs whose removal makes G acyclic) has size at least m^{2}/2n^{2} + m/2n, and this bound is tight. Using this result, we show how to find subgraphs of high minimum degrees, and also long cycles in Eulerian digraphs. These results were motivated by a conjecture of Bollobas and Scott.
Joint work with Ma, Shapira, Sudakov and Yuster.
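The feedback arc set bound above can be sanity-checked by brute force on the smallest Eulerian example, the directed triangle, where m^2/2n^2 + m/2n = 1 and the bound is attained. A toy Python sketch (ours, not the authors' proof technique):

```python
from itertools import combinations

def is_acyclic(vertices, arcs):
    # Kahn-style check: repeatedly peel off vertices with no incoming arc.
    remaining = set(vertices)
    while remaining:
        sources = [v for v in remaining
                   if not any(b == v and a in remaining for a, b in arcs)]
        if not sources:
            return False          # a directed cycle survives
        remaining -= set(sources)
    return True

def min_feedback_arc_set_size(vertices, arcs):
    # Brute force: smallest k such that deleting some k arcs leaves G acyclic.
    for k in range(len(arcs) + 1):
        for removed in combinations(arcs, k):
            if is_acyclic(vertices, [a for a in arcs if a not in removed]):
                return k

# Directed triangle: n = 3 vertices, m = 3 arcs, Eulerian (in-degree = out-degree).
arcs = [(0, 1), (1, 2), (2, 0)]
n, m = 3, 3
bound = m**2 / (2 * n**2) + m / (2 * n)                   # = 1.0
print(min_feedback_arc_set_size(range(3), arcs), bound)   # 1 1.0
```

Exponential brute force is of course only viable for tiny digraphs; the point is just to see the bound being tight.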
This talk is on the combinatorics of partitions. Given a positive integer s, the set of s-cores is a highly structured subset of the set of all partitions, which is important in representation theory. I'll take two positive integers s,t, and define a set of partitions which includes both the set of s-cores and the set of t-cores, and is somehow supposed to be the appropriate analogue of the union of these two sets.
This work is somewhat unfinished, and needs a new impetus. So I'll be hoping for some good questions!
A 2-dimensional framework is a straight line realisation of a graph in the Euclidean plane. It is radically solvable if the set of vertex coordinates is contained in a radical extension of the field of rationals extended by the squared edge lengths. We show that the radical solvability of a generic framework depends only on its underlying graph and characterise which planar graphs give rise to radically solvable generic frameworks. We conjecture that our characterisation extends to all graphs. This is joint work with J. C. Owen (Siemens).
Ecological occurrence matrices, such as Darwin's finches tables, are 0,1-matrices whose rows are species of animals and columns are islands, and the (i,j) entry is 1 if animal i lives on island j, and is 0 otherwise. Moreover, the row sums and column sums are fixed by field observation of these islands. These occurrence matrices are thus just bipartite graphs G with a fixed degree sequence, where V_{1}(G) is the set of animals and V_{2}(G) is the set of islands. The problem is, given an occurrence matrix, how to tell whether the distribution of animals is due to competition or to chance. Thus, researchers in ecology are highly interested in sampling ecological occurrence tables easily and uniformly so that, by using Monte Carlo methods, they can approximate test statistics allowing them to prove or disprove some null hypothesis about competition amongst animals.
Several algorithms are known to construct realizations on n vertices and m edges of a given degree sequence, and each one of them has its strengths and limitations. Most of these algorithms fit into one of two categories: Markov chain Monte Carlo methods based on edge swaps, and sequential sampling methods that start from an empty graph on n vertices and add edges sequentially according to some probability scheme. We present a new algorithm that samples uniformly all simple bipartite realizations of a degree sequence and whose basic ideas may be seen as implementing a dual sequential method, as it inserts vertices sequentially instead of edges.
The running time of our algorithm is O(m), where m is the number of edges in any realization. The best algorithms that we know of are the one implicit in [1], which has a running time of O(m a_{max}), where a_{max} is the maximum degree, but does not sample uniformly. Similarly, the algorithm presented by Chen et al. [3] does not sample uniformly, but only nearly uniformly. Moreover, the edge-swapping Markov chains pioneered by Brualdi [2] and Kannan et al. [5], and much used by researchers in ecology, have just been proven in [4] to be rapidly mixing for semi-regular degree sequences only.
This is the first in a short series inspired by the talks by Terence Chan at our recent workshop on "Information flows and information bottlenecks". No familiarity with the talks will be assumed.
I will define the entropy function of a family of random variables on a finite probability space. I will prove Chan's theorem that it can be approximated (up to a scalar multiple) by the entropy function obtained when G is a finite group (carrying the uniform distribution) and the random variables are associated with a family of subgroups of G: the random variable associated with H takes a group element to the coset of H containing it.
This is the second in a short series inspired by the talks by Terence Chan at our recent workshop on "Information flows and information bottlenecks". No familiarity with the talks will be assumed.
A partition is uniform if all its parts have the same size. I will define orthogonality of partitions, and interpret orthogonality in terms of the entropy of the associated random variables. I will explain how a sublattice of the partition lattice consisting of mutually orthogonal uniform partitions gives rise to an association scheme.
Since the foundational results of Thomason and Chung-Graham-Wilson on quasirandom graphs over 20 years ago, there has been a lot of effort by many researchers to extend the theory to hypergraphs. I will present some of this history, and then describe our recent results that provide such a generalization and unify much of the previous work. One key new aspect in the theory is a systematic study of hypergraph eigenvalues. If time permits I will show some applications to Sidorenko's conjecture and the certification problem for random k-SAT. This is joint work with John Lenz.
This talk will continue the discussion from previous talks in the series.
Fix a prime p. Starting with any finite undirected graph G, pick an automorphism of G of order p and delete all the vertices that are moved by this automorphism. Apply the same procedure to the new graph, and repeat until a graph G* is reached that has no automorphisms of order p. Is the reduced graph G* uniquely defined (up to isomorphism) by G? I.e., is G* independent of the sequence of automorphisms chosen?
In a CSG talk in 2010, John Faben showed that the answer is "yes" in the special case p = 2 (i.e., reduction by involutions), using Newman's Lemma on confluence of reduction systems. Later, he noticed that the general case can be handled using the so-called Lovász vector of a graph. I'll prove the general result and sketch some consequences to the extent that time allows.
A derangement is a permutation with no fixed points.
An elementary theorem of Jordan asserts that a transitive permutation group of degree n>1 contains a derangement. Arjeh Cohen and I showed that in fact at least a fraction 1/n of the elements of the group are derangements. So there is a simple and efficient randomised algorithm to find one: just keep picking random elements until you succeed.
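The randomised algorithm just described is plain rejection sampling; a toy Python illustration on the symmetric group S_4 (the representation and function name are ours):

```python
import random
from itertools import permutations

def find_derangement(group_elements):
    # Rejection sampling: since at least a 1/n fraction of a transitive
    # group's elements are derangements, the expected number of trials
    # is at most n.
    while True:
        g = random.choice(group_elements)
        if all(g[i] != i for i in range(len(g))):
            return g

# Illustration on S_4, permutations stored as tuples g with g[i] = image of i.
S4 = list(permutations(range(4)))
g = find_derangement(S4)
print(g)  # some derangement of {0, 1, 2, 3}
```

For the full symmetric group the derangement fraction is in fact about 1/e, so very few trials are needed; the 1/n bound is what guarantees efficiency for an arbitrary transitive group.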
Bill Kantor improved Jordan's theorem to the statement that a transitive group contains a derangement of prime power order. The theorem is constructive but requires the classification of finite simple groups. Emil Vaughan showed that Kantor's theorem yields a polynomial-time (but not at all straightforward) algorithm for finding one.
This month, Vikraman Arvind from Chennai posted a paper on the arXiv giving a very simple deterministic polynomial-time algorithm to find a derangement in a transitive group. The proof is elementary and combinatorial.
To what extent is the spectrum of a matrix determined by its "structure"? For example, what claims can be made simultaneously about all matrices in some qualitative class (i.e. with some fixed sign pattern)? Qualitative classes are naturally associated with signed digraphs or signed bipartite graphs, and some nice theory relates matrix spectra to structures in these graphs. But there are more exotic ways of associating matrix-sets, not necessarily qualitative classes, with graphs (perhaps directed, signed, etc), and extracting information from the graphs. In applications, a quick graph-computation may then suffice to make surprising claims about a family of systems. I'll talk about some recent results and open problems in this area, focussing in particular on the use of compound matrices.
Meta-algorithms for deciding properties of combinatorial structures have recently attracted a significant amount of attention. For example, the famous theorem of Courcelle asserts that every property definable in monadic second order logic can be decided in linear time for graphs with bounded tree-width.
We focus on deciding simpler properties, those definable in first order (FO) logic. In the case of graphs, FO properties include the existence of a subgraph or a dominating set of a fixed size. Classical results include the almost linear time algorithm of Frick and Grohe which applies to graphs with locally bounded tree-width. In this talk, we first survey commonly applied techniques to design FPT algorithms for FO properties. We then focus on one class of graphs, intersection graphs of intervals with finitely many lengths, where these techniques do not seem to apply in a straightforward way, and we design an FPT algorithm for deciding FO properties for this class of graphs.
The talk contains results obtained during joint work with Ganian, Hlineny, Obdrzalek, Schwartz and Teska.
Suppose various processors in a network wish to reach agreement on a particular decision. Unfortunately, some unknown subset of these may be under the control of a malicious adversary who desires to prevent such an agreement being possible.
To this end, the adversary will instruct his "faulty" processors to provide inaccurate information to the non-faulty processors in an attempt to mislead them. The aim is to construct an "agreement protocol" that will always foil the adversary and enable the non-faulty processors to reach agreement successfully (perhaps after several rounds of communication).
In traditional agreement problems, it is usually assumed that the set of faulty processors is "static", in the sense that it is chosen by the adversary at the start of the process and then remains fixed throughout all communication rounds. In this talk, we shall instead focus on a "mobile" version of the problem, providing results both for the case when the communications network forms a complete graph and also for the general case when the network is not complete.
In 1983, Allan Schwenk posed a problem in the American Mathematical Monthly asking whether the edge set of the complete graph on ten vertices can be decomposed into three copies of the Petersen graph. He, and O. P. Lossers (the problem-solving group at Eindhoven University run by Jack van Lint – "oplossers" is Dutch for "solvers") gave a negative solution in 1987. This year, Sebastian Cioaba and I considered the question: for which m is it possible to find 3m copies of the Petersen graph which cover the complete graph m times. We were able to show that this is possible for all natural numbers m except for m = 1. I will discuss the proof, which involves three parts: one uses linear algebra, one uses group theory, and one is bare-hands.
Of course this problem can be generalised to an arbitrary graph G: Given a graph G on n vertices, for which integers m can one cover the edges of K_{n} m times by copies of G? I will say a bit about what we can do, and pose some very specific problems.
The theory of combinatorial species of structure has had a great impact on Statistical Mechanics, especially through the use of generating functions. It has been described as a Rosetta stone for the key models of Statistical Mechanics (Faris 08), through the way in which it has the capacity to abstract and generalise many of the key features in Statistical Mechanical models. The talk will focus on developing the main notions of these species of structure and the algebraic identity called Lagrange-Good inversion, a method of finding the coefficients of an inverse power series. I will introduce some of the key concepts of Statistical Mechanics and indicate how they can be understood in the context of the combinatorial tools we have. These interpretations also indicate some interesting combinatorial identities. The final emphasis is on how the Lagrange-Good inversion can help us to obtain a virial expansion for a gas comprising many types of particle, as was used in a recent paper (Jansen, Tsagkarogiannis, Ueltschi).
We examine the structure of 1-extendable graphs G which have no even F-orientation, where F is a fixed 1-factor of G. A characterization is given in the cases of regular graphs, graphs of connectivity at least four, and graphs of maximum degree three.
Terminology: A graph G is 1-extendable if every edge belongs to at least one 1-factor. An orientation of a graph G is an assignment of a "direction" to each edge of G. Now suppose that G has a 1-factor F. Then an even F-orientation of G is an orientation in which each F-alternating cycle has an even number of edges directed in the same fixed direction around the cycle.
The Brouwer fixed point theorem and the Borsuk-Ulam theorem are beautiful and well-known theorems of topology that admit combinatorial analogues: Sperner's lemma and Tucker's lemma. In this talk, I will trace recent connections and generalizations of these combinatorial theorems, including applications to the social sciences.
Consider two strict weak orders (that is, irreflexive, transitive, non-total relations) on the same finite set. How similar are the two? This question is motivated by the statistical question of association between two rankings which contain ties. In order to assess the similarity of the orders, I will present an approach where the lack of agreement is assessed by counting the number of certain operations needed to transform one weak order into the other. The resulting measure is a symmetric and positive definite function but does not satisfy the triangle inequality. Hence, technically, it is a distance but not a metric. So far the proposed distance can only be computed recursively. Input from the audience which would help me to derive a closed-form solution, and pointers to related "pure" literature I am not aware of, will be greatly appreciated.
The guessing game is a variant of the "guessing your own hat" game and can be played on any simple undirected graph. The aim of this game is to maximise the probability of the event that all players correctly guess their own value without any communication. The fractional clique cover strategy for playing the guessing game was developed by Christofides and Markström and was conjectured to be optimal. In this talk, we will construct some counterexamples to this conjecture.
Acyclic orientations of a graph arise in various applications, including heuristics for colouring. The number of acyclic orientations is an evaluation of the chromatic polynomial. Stanley gave a formula for the average number of acyclic orientations of graphs with n vertices and m edges. Recently we have found the graphs with the minimum number of acyclic orientations, but the more interesting question about the maximum number is still open.
The regular complete bipartite graph (on an even number of vertices) is thought to maximise the number of acyclic orientations. Unexpectedly, the number turns out to be a poly-Bernoulli number, one of a family of numbers connected with polylogarithms. We will try to explain these connections.
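By Stanley's theorem, the number of acyclic orientations of G is (-1)^n P(G,-1), where P is the chromatic polynomial, so for small graphs it can be computed by deletion-contraction. A rough Python sketch (ours, for illustration; assumes edges are given as sorted pairs on vertices 0..n-1):

```python
def acyclic_orientations(n, edges):
    """Number of acyclic orientations, via Stanley: a(G) = (-1)^n P(G, -1)."""
    def chrom(n, edges, k):
        # Deletion-contraction evaluation of the chromatic polynomial at k.
        if not edges:
            return k ** n
        (u, v), rest = edges[0], edges[1:]
        # Contraction: merge v into u, dropping loops and parallel edges.
        contracted = {tuple(sorted((u if x == v else x, u if y == v else y)))
                      for x, y in rest}
        contracted = [e for e in contracted if e[0] != e[1]]
        return chrom(n, rest, k) - chrom(n - 1, contracted, k)
    return (-1) ** n * chrom(n, edges, -1)

# A triangle has 2^3 - 2 = 6 acyclic orientations.
print(acyclic_orientations(3, [(0, 1), (0, 2), (1, 2)]))  # 6
```

Deletion-contraction takes exponential time in general, so this is only a check on small instances, not a route to the extremal question in the talk.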
An edge-regular graph with parameters (v,k,t) is a regular graph of order v and valency k, such that every edge is in exactly t triangles, and a clique in a graph is a set of pairwise adjacent vertices. I will apply a certain quadratic "block intersection" polynomial to obtain information about cliques in an edge-regular graph with given parameters.
Software testing makes use of combinatorial designs called covering arrays. These arrays are a generalization of Latin squares and orthogonal arrays. Ideally we look to use the smallest possible array for the given parameters, but this is a hard problem. We define a family of graphs, partition graphs, which give a full characterization of optimal covering arrays using homomorphisms. We investigate these graphs and are able to determine their diameter and, for some subfamilies, the clique number, chromatic number and homomorphic core.
There are many open problems involving these graphs.
One can (for the most part) formulate a model of a classical system in either the Lagrangian or the Hamiltonian framework. Though it is often thought that those two formulations are equivalent in all important ways, this is not true: the underlying geometrical structures one uses to formulate each theory are not isomorphic. This raises the question whether one of the two is a more natural framework for the representation of classical systems. In the event, the answer is yes: I state and prove two technical results, inspired by simple physical arguments about the generic properties of classical systems, to the effect that, in a precise sense, classical systems evince exactly the geometric structure Lagrangian mechanics provides for the representation of systems, and none that Hamiltonian mechanics does. The argument clarifies the conceptual structure of the two systems of mechanics, their relations to each other, and their respective mechanisms for representing physical systems.
In 1998 Hastings and Levitov proposed a model for planar random growth such as diffusion-limited aggregation (DLA) and the Eden model, in which clusters are represented as compositions of conformal mappings. I shall introduce an anisotropic version of this model, and discuss some of the natural scaling limits that arise. I shall show that very different behaviour can be seen in the isotropic case, and that here the model gives rise to a limit object known as the Brownian web.
I will present a result which gives a characterization of the law of the partition function of a Brownian directed polymer model in terms of the eigenfunctions of the quantum Toda lattice, and has close connections to random matrix theory.
The law of elastic reflection by a smooth mirror surface is well known: the angle of incidence is equal to the angle of reflection. In contrast, the law of elastic scattering by a rough surface is not unique, but depends on the shape of the microscopic pits and grooves forming the roughness. In the talk we will give the definition of a rough surface and provide a characterisation for laws of scattering by rough surfaces. We will also consider several problems of optimal resistance for rough bodies and discuss their relationship with Monge-Kantorovich optimal mass transfer. These problems can be naturally interpreted in terms of optimal roughening of the surface for artificial satellites on low Earth orbits.
Hidden Markov Models (HMMs) are a commonly used tool for inference of transcription factor (TF) binding sites from DNA sequence data. We exploit the mathematical equivalence between HMMs for TF binding and the "inverse" statistical mechanics of hard rods in a one-dimensional disordered potential to investigate learning in HMMs. We derive analytic expressions for the Fisher information, a commonly employed measure of confidence in learned parameters, in the biologically relevant limit where the density of binding sites is low. This allows us to formulate a simple criterion for when it is possible to distinguish between binding sites of closely related TFs, and to derive a scaling relation connecting the quantity of training data to the minimum energy (statistical) difference between TFs that one can resolve. We apply our formalism to the NF-$\kappa$B TF-family and find that it is composed of two related but statistically distinct sub-families.
Since the seminal work by Ott et al., the concept of controlling chaos has attracted much attention and several techniques have been proposed. Among these control methods, delayed feedback control is of interest for its applicability and tractability for analysis. In this talk, we propose a parametric delayed feedback control in which the delay time is adaptively changed by the state of the system. Unlike conventional chaos control, we are able to obtain super-stable periodic orbits. From the viewpoint of dynamical systems, the whole controlled system becomes a particular two-dimensional system with multiple attractors in the sense of Milnor. Finally, I would like to mention a possible application of this control technique to a coding scheme.
We consider ensembles of trajectories associated with large deviations of time-integrated quantities in stochastic models. Motivated by proposals that these ensembles are relevant for physical processes such as shearing and glassy relaxation, we show how they can be generated directly using auxiliary stochastic processes. We illustrate our results using the Glauber-Ising chain, for which energy-biased ensembles of trajectories can exhibit ferromagnetic ordering, and briefly discuss the relation between such biased ensembles and quantum phase transitions. The talk will conclude with a wish list of things we'd like to work out but so far haven't been able to.
Nature is rich with many different examples of the cohesive motion of animals. Individual-based models are a popular and promising approach to explain features of moving animal groups such as flocks of birds or shoals of fish. Previous models for collective motion have primarily focused on group behaviours of identical individuals, often moving at a constant speed. In contrast we put our emphasis on modelling the contributions of different individual-level characteristics within such groups by using stochastic asynchronous updating of individual positions and orientations. Recent work has highlighted the importance of speed distributions, anisotropic interactions and noise in collective motion. We test and justify our modelling approach by comparing simulations to empirical data for fish, birds and insects. The techniques we use range from motion tracking to "equation-free" coarse-grained modelling. With the maturation of the field new exciting applications are possible for models such as ours.
In equilibrium statistical mechanics macroscopic observables are calculated as averages over statistical ensembles, which represent probability distributions of the microstates of the system under given constraints. Away from equilibrium ensemble theory breaks down due to the strongly dissipative nature of non-equilibrium steady states, where, for example, energy conservation no longer holds in general. Nevertheless, ensemble approaches can be useful in describing the statistical mechanics of non-equilibrium systems, as I discuss in this talk. Two different approaches are presented: (i) a theory of microscopic transition rates in sheared steady states of complex fluids and (ii) a statistical theory for jammed packings of non-spherical objects. In both cases the ensemble approach relies crucially on an assumption of ergodicity in the absence of equilibrium thermalization.
Organisers: Rosemary J. Harris and Hugo Touchette
Much effort has focused recently on developing models of stochastic systems that are non-Markovian or show long-range correlations in space or time, or both. The need for such models has come from many different fields, ranging from mathematical finance to biophysics, and from engineering to statistical mechanics.
This workshop will bring together a number of mathematicians and engineers interested in stochastic processes having long-range correlations, with a view to share ideas as to how we can define such correlations mathematically, as well as to how we can devise stochastic processes that go beyond the Markov model.
The meeting is part of the CoSyDy series, a London Mathematical Society Scheme 3 network bringing together UK mathematicians investigating Complex Systems Dynamics.
Schedule:
| Time | Programme |
|---|---|
| 12:15-13:05 | Buffet lunch and welcome |
| 13:05-13:10 | Welcome |
| 13:10-13:55 | Thierry Bodineau (Departement de Mathematiques et Applications, ENS, Paris): Long range correlations in non-equilibrium systems |
| 14:00-14:25 | Robert Jack (Department of Physics, University of Bath): Large deviations, glass transitions, and long-ranged correlations |
| 14:30-14:55 | Robert Concannon (School of Physics & Astronomy, University of Edinburgh): A non-Markovian Asymmetric Simple Exclusion Process |
| 15:00-15:25 | Tea and coffee |
| 15:30-16:15 | Sergei Fedotov (School of Mathematics, The University of Manchester): Long-memory effects in anomalous diffusion with reactions |
| 16:20-16:45 | Raul Mondragon (Department of Electronic Engineering, Queen Mary, University of London): Long-range correlations in queues |
See the full programme with abstracts in the attachment.
All are welcome. Registration is not required, but for catering purposes we would appreciate if you could confirm your attendance to the organisers.
Attachment: cosydy-qmul2011plan.pdf [PDF 67KB]
Gonzalez-Tokman, Hunt and Wright studied a metastable expanding system which is described by a piecewise smooth and expanding interval map. It is assumed that the metastable map has two invariant sub-intervals and exactly two ergodic invariant densities. Due to small perturbations, the system starts to allow for infrequent leakage through subsets (called holes) of the initially invariant sub-intervals, forcing the two invariant sub-systems to merge into one perturbed system which has exactly one invariant density. It is proved that the unique invariant density of the perturbed interval map can be approximated by a particular convex combination of the two invariant densities of the original interval map, with the weights in the combination depending on the sizes of the holes.
In this talk we will present analogous results in two cases: 1. intermittent interval maps; 2. Randomly perturbed expanding maps.
This talk is about recurrence time statistics for chaotic maps with strange attractors - focusing on the probability distributions that describe the typical recurrence statistics to certain subsets of the phase space. The limiting probability distributions depend on the geometry of the (chaotic) attractor, the dimension of the SRB measure on the attractor, and the observables on the system.
We investigate the problem of Diophantine approximation on rational surfaces using ergodic-theoretic techniques. It turns out that this problem is closely related to the asymptotic distribution of orbits for a suitably constructed dynamical system. Using this connection we establish analogues of Khinchin's and Jarnik's theorems in our setting.
The joint spectral radius (JSR) of a finite set of real d × d matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth.
The purpose of this talk is to present the first completely explicit family of 2 × 2 matrices which do not possess the finiteness property. Time permitting, I will also mention recent advances concerning maximizing sequences (those which realize the JSR) of polynomial complexity.
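The JSR is hard to compute exactly, but the definition immediately yields a lower bound from any finite product, since rho(A_{i1}...A_{ik})^{1/k} <= JSR for the spectral radius rho. A brute-force Python sketch (illustrative only; the example uses a classical pair of unipotent matrices, not the family from the talk):

```python
import itertools

def matmul(X, Y):
    # 2 x 2 matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectral_radius(M):
    # Largest eigenvalue modulus of a real 2 x 2 matrix.
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = disc ** 0.5
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return abs(det) ** 0.5      # complex conjugate pair: |lambda|^2 = det

def jsr_lower_bound(matrices, max_len=6):
    # Maximise rho(product)^(1/k) over all words of length at most max_len;
    # every such value is a valid lower bound on the JSR.
    best = 0.0
    for k in range(1, max_len + 1):
        for word in itertools.product(matrices, repeat=k):
            prod = word[0]
            for A in word[1:]:
                prod = matmul(prod, A)
            best = max(best, spectral_radius(prod) ** (1.0 / k))
    return best

# Classical pair for which the short word AB already realises the growth rate
# (rho(AB) is the square of the golden ratio).
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(jsr_lower_bound([A, B]))  # ~1.618
```

The number of words grows exponentially in max_len, which is one reason certifying the finiteness property (or its failure) is so delicate.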
This talk investigates the effect of network topology on the fair allocation of network resources among a set of agents, an all-important issue for the efficiency of transportation networks all around us. We analyse a generic mechanism that distributes network capacity fairly among existing flow demands, and describe some conditions under which the problem can be solved by semi-analytical methods. We find that, for some regions of the parameter space, a fair allocation implies a decrease of at least 50% from maximum throughput. We also find that the histogram of the flow allocations assigned to the agents decays as a power law with exponent -1. Our semi-analytical framework suggests possible explanations for the well-known reduction of throughput in fair allocations. It also suggests that the network topology can lead to highly uneven (but fair) distributions of resources, a remark of caution to network designers.
We study the effect of external forcing on the saddle-node bifurcation pattern of interval maps. Replacing fixed points of unperturbed maps by invariant graphs, we obtain direct analogues to the classical result both in a measure-theoretic and a topological setting. As an interesting new phenomenon, a dichotomy appears for the behaviour at the bifurcation point, which allows the bifurcation to be either "smooth" (as in the classical case) or "non-smooth".
A recent study into the geometry underlying discontinuities in dynamics revealed some surprises. The problems of interest are fundamental, things like: frictional sticking, electronic switching, protein activation and neuron spiking. When a discontinuity occurs at some threshold value in a system of differential equations, the solutions that result might not be unique. Besides the myriad cute models from applications, we want to know what discontinuities really tell us about dynamics in the real world. Non-unique solutions are easily dismissed as unphysical, yet they tell us something about the extreme behaviour made possible in the limit as a sudden change becomes almost discontinuous. Initially unique solutions may become multi-valued, revealing extreme sensitivity to initial conditions and a breakdown of determinism, yet the possible outcomes lie in a well-defined set: an "explosion". An intriguing connection between discontinuities and singular perturbations is revealed by studying the so-called two-fold singularities and canards, borrowing ideas from nonstandard analysis along the way. The outcomes have been seen in superconductor experiments, are possible in control circuits, they are hidden in plain sight in the dynamics of friction, impacts, and neuron spiking, and they lead to non-deterministic forms of chaos.
The macroscopic behaviour of microscopically defined particle models is investigated by equation-free techniques, where no explicitly given equations are available for the macroscopic quantities of interest. We investigate situations with an intermediate number of particles, where the number of particles is too large for microscopic investigations of all particles and too small for analytical investigations using many-particle limits and density approximations. By developing and combining very robust numerical algorithms, it was possible to perform an equation-free numerical bifurcation analysis of macroscopic quantities describing the structure and pattern formation in particle models. The approach will be demonstrated for two examples from traffic and pedestrian flow. The traffic flow on a single-lane highway exhibits, besides uniform flow solutions, travelling waves of high-density regions. Bifurcations and co-existence of these two solution types are investigated. The pedestrian flow shows the emergence of an oscillatory pattern of two crowds passing a narrow door in opposite directions. The oscillatory solutions appear due to a Hopf bifurcation, which is detected numerically by an equation-free continuation of a stationary state of the system. Furthermore, an equation-free two-parameter continuation of the Hopf point is performed to investigate the oscillatory behaviour in detail, using the door width and the relative velocity of the pedestrians in the two crowds as parameters.
I discuss the synchronization of cows, first using an agent-based model and then formulating a mechanistic model for the daily activities of a cow (eating, lying down, and standing) in terms of a piecewise-smooth dynamical system. I analyze the properties of this bovine dynamical system and develop an exact integrative form as a discrete-time mapping. I then couple multiple cow "oscillators" together to study synchrony and cooperation in cattle herds. With this abstract approach, I not only investigate equations with interesting dynamics but also develop interesting biological predictions. In particular, the model illustrates that it is possible for cows to synchronize less when the coupling is increased.
The joint spectral radius of a finite set of square matrices is defined to be the maximum possible exponential growth rate of products of matrices drawn from that set. In joint work with Nikita Sidorov, Kevin Hare and Jacques Theys, we examine a certain one-parameter family of pairs of matrices in detail, showing that the matrix products which realise this optimal growth rate correspond to Sturmian sequences with a particular characteristic ratio. We investigate the dependence of this characteristic ratio on the parameter, and show that it takes the form of a Devil's staircase. We establish some fine properties of this Devil's staircase, answering a question posed by T. Bousch.
The decay of classical temporal correlations represents a fundamental issue in dynamical systems theory, and, in the generic setting of systems with a mixed phase space, it still presents a remarkable amount of open problems. We will describe prototype systems where the main questions arise, and discuss some recent progress where polynomial mixing rates are linked to large deviations estimates.
In this talk I will introduce the issue of the emergence of cooperation, identified by Science as one of the 25 most important problems for the 21st century. I will discuss the puzzle that cooperative behavior poses for evolutionary theory and the importance of cooperation in its major steps. Then I will present the main tool with which one can study this problem, namely game theory. I will review games played by two players and their classical and evolutionary versions. Finally, I will devote some time to recent experiments addressing the relevance of the structure of the evolving population for the emergence of cooperation.
This talk will discuss a stability index that characterises the local geometry of the basin of attraction for a dynamical system. The index is of particular interest for attractors that are not asymptotically stable - such attractors are known to arise robustly, for example, as heteroclinic cycles in systems with symmetries.
Heterogeneity is a ubiquitous aspect of many social and economic complex systems. The analysis and modeling of heterogeneous systems is quite difficult because each economic and social actor is characterized by different attributes and it is usually acting on a multiplicity of time scales. We use statistically validated networks [1], a recently introduced method to validate links in a bipartite system, to investigate heterogeneous social and economic systems. Specifically, we investigate the classic movie-actor system [1] and the trading activity of individual investors of Nokia stock [2]. The method is unsupervised and allows constructing networks of social actors where the links indicate co-occurrence of events or decisions. Each link is statistically validated against a null hypothesis taking into account system heterogeneity. Community detection is performed on the statistically validated networks and the communities (partitions) obtained are investigated with respect to the over-expression or under-expression of the attributes characterizing the social actors and/or their activities [3].
[1] Michele Tumminello, Salvatore Miccichè, Fabrizio Lillo, Jyrki Piilo, Rosario N Mantegna (2011) Statistically Validated Networks in Bipartite Complex Systems. PLoS ONE 6(3): e17994.
[2] Michele Tumminello, Fabrizio Lillo, Jyrki Piilo, and Rosario N. Mantegna, Identification of clusters of investors from their real trading activity in a financial market (2012) New J. Phys. 14 013041
[3] Michele Tumminello , Salvatore Miccichè , Fabrizio Lillo , Jan Varho , Jyrki Piilo and Rosario N Mantegna, Community characterization of heterogeneous complex systems (2011) J. Stat. Mech. P01019
In this talk we will discuss the application of the Fluctuation Theorem (FT) to systems where the heat bath is out of equilibrium. We first recall the main properties of the FT, starting from experimental results. We then discuss the result of an experiment where we measure the energy fluctuations of a Brownian particle confined by an optical trap in an aging gelatin after a very fast quench (less than 1 ms). The strong non-equilibrium fluctuations due to the assembly of the gel are interpreted, within the framework of the FT, as a heat flux from the particle towards the bath. We derive an analytical expression for the heat probability distribution, which fits the experimental data and satisfies a fluctuation relation similar to that of a system in contact with two baths at different temperatures. We finally show that the measured heat flux is related to the violation of the equilibrium Fluctuation-Dissipation Theorem for the system.
The new faces of the Feigenbaum point: Dynamical hierarchy, self-similar network, theoretical game and stationary distribution. In this talk we first show that the recently revealed features of the dynamics toward the Feigenbaum attractor form a hierarchical construction with modular organization that leads to a clear-cut emergent property. Then we transcribe the well-known Feigenbaum scenario into families of networks via the horizontal visibility algorithm, derive exact results for their degree distributions, recast them in the context of the renormalization group and find that its fixed points coincide with those of network entropy optimization. Next we study a discrete-time version of the replicator equation for two-strategy theoretical games. Their stationary properties differ from those of continuous time for sufficiently large values of the parameters, where periodic and chaotic behavior replaces the usual fixed-point population solutions. We observe the familiar period-doubling and chaotic-band-splitting attractor cascades of unimodal maps. Finally, we look at the limit distributions of sums of deterministic chaotic variables in unimodal maps and find a remarkable renormalization group structure associated with the operation of increment of summands and rescaling. In this structure—where the only relevant variable is the difference in control parameter from its value at the transition to chaos—the trivial fixed point is the Gaussian distribution and a novel nontrivial fixed point is a multifractal distribution that emulates the Feigenbaum attractor.
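The horizontal visibility algorithm mentioned above has a simple formulation: each data point of a time series becomes a node, and two nodes are linked when every intermediate value lies strictly below both of them. A direct O(n^2) sketch of the mapping (the talk's exact degree-distribution results are not reproduced here):

```python
from collections import Counter

def horizontal_visibility_edges(series):
    """Edges (i, j), i < j, of the horizontal visibility graph of a series:
    i and j are linked iff series[k] < min(series[i], series[j])
    for every intermediate index i < k < j."""
    n = len(series)
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j))]

def hv_degree_histogram(series):
    """Histogram {degree: count} of node degrees in the visibility graph."""
    deg = Counter()
    for i, j in horizontal_visibility_edges(series):
        deg[i] += 1
        deg[j] += 1
    return Counter(deg.values())
```

A monotone series maps to a path (each point only sees its neighbours), while non-monotone series acquire longer-range links; it is the degree distributions of the graphs obtained from period-doubling orbits that carry the renormalization-group structure discussed in the talk.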
I will discuss recent results concerning topological invariants for Hénon-like maps in dimension two, using the renormalisation apparatus constructed by de Carvalho, Lyubich, Martens and myself.
For details see the workshop webpage
Evolutionary dynamics have been traditionally studied in infinitely large homogeneous populations, where each individual is equally likely to interact with every other individual. However, real populations are finite and characterised by complex interactions among individuals. Over the last few years there has been a growing interest in studying evolutionary dynamics in finite structured populations represented by graphs. An analytic treatment of the evolutionary process is possible when the contact structure of the population can be represented by simple graphs with a lot of symmetry and little complexity, such as the complete graph, the circle and the star graph. However, this is usually infeasible on complex graphs, and the use of various assumptions and approximations is necessary for the exploration of the process. We propose a powerful method for the approximation of the evolutionary process in populations with a complex structure. Comparisons of the model's predictions with the results of computer simulations reveal the effectiveness of the method and the improved accuracy that it provides when compared to well-known pair approximation methods.
I will explain how one can track unstable periodic orbits in experiments using non-invasive feedback control in the spirit of Pyragas' time-delayed feedback. In some (experimentally very common) situations one can achieve non-invasiveness of the control without subtracting a delayed term in the control and without having to apply Newton iterations. I will show some recent experimental results of David Barton, who was able to trace out a resonance surface of a mechanical nonlinear oscillator around a cusp in two parameters with high accuracy.
"Music exists in an infinity of sound. I think of all music as existing in the substance of the air itself. It is the composer's task to order and make sense of sound, in time and space, to communicate something about being alive through music." ~ Libby Larsen
It is the performer's task then to intuit this order, to make sense of the music -- in ways that may augment or be different from the composer's own understanding -- and to communicate the interpreted structure through prosodic cues to the listener. Just as physicists develop mathematical models to make sense of the world in which we live, music science researchers seek mathematical models to represent and manipulate music structures, whether frozen in time (e.g. as mapped out in a score) or communicated in performance. Mathematics is also the glue that binds music to digital representations, allowing for large-scale computations carried out by machines.
I shall begin by introducing some of my own work originating in music structure representation and analysis, then move on to more recent investigations into aspects of music prosody. A key element of this talk will be the posing of some open problems in the scientific study of music structure and expressive performance, in which I hope to solicit interest, and to which I shall invite responses.
Many turbulent flows undergo sporadic random transitions after long periods of apparent statistical stationarity. A straightforward study of these transitions, through direct numerical simulation of the governing equations is nearly always impracticable. In this talk, we consider two-dimensional and geostrophic turbulence models with stochastic forces in regimes where two or more attractors coexist. We propose a non-equilibrium statistical mechanics approach to the computation of rare transitions between two attractors. Our strategy is based on the large deviation theory for stochastic dynamical systems (Freidlin-Wentzell theory) derived from a path integral representation of the stochastic process.
It turns out that the one-dimensional probability distributions of annihilating Brownian motions on the real line form a Pfaffian point process. It also turns out that this Pfaffian point process describes the one-dimensional statistics of real eigenvalues in the Ginibre ensemble of random matrices. Is the real sector of the Ginibre ensemble equivalent to annihilating Brownian motions as a stochastic process?
The metric theory of Diophantine approximation on fractal sets is developed in which the denominators of the rational approximants are restricted to lacunary sequences. The case of the standard middle-third Cantor set and the sequence {3^n : n \in N} is the starting point of our investigation. Our metric results for this simple setup answer a problem raised by Mahler. As with all 'good' problems, the solution opens up a can of worms.
We study the steady state of a finite XX chain coupled at its boundaries to quantum reservoirs made of free spins that interact one after the other with the chain. The two-point correlations are calculated exactly and it is shown that the steady state is completely characterized by the magnetization profile and the associated current. Except at the boundary sites, the magnetization is given by the average of the reservoirs' magnetizations. The steady-state current, proportional to the difference in the reservoirs' magnetizations, shows non-monotonic behavior with respect to the system-reservoir coupling strength, with an optimal current at a finite value of the coupling. Moreover, we show that the steady state can be described by a generalized Gibbs state.
Strong thermodynamical arguments exist in the literature which show that the entropy S of, say, a many-body Hamiltonian system should be extensive (i.e., S(N) ~ N) independently of the range of the interactions between its elements. If the system has short-range interactions, an additive entropy, namely the Boltzmann-Gibbs one, does the job. For long-range interactions, nonergodicity and strong correlations are generically present, and nonadditive entropies become necessary to preserve the desired entropic extensivity. These and related recent points (q-Fourier transform, large-deviation theory, nonlinear quantum mechanics) will be briefly presented. BIBLIOGRAPHY: (i) J.S. Andrade Jr., G.F.T. da Silva, A.A. Moreira, F.D. Nobre and E.M.F. Curado, Phys. Rev. Lett. 105, 260601 (2010); (ii) F.D. Nobre, M.A. Rego-Monteiro and C. Tsallis, Phys. Rev. Lett. 106, 140601 (2011); (iii) http://tsallis.cat.cbpf.br/biblio.htm
Direct numerical continuation in physical experiments is made possible by the combination of ideas from control theory and nonlinear dynamics, resulting in a family of methods known as control-based continuation. This family of methods allows both stable and unstable periodic orbits to be tracked through bifurcations such as a fold by varying suitable system parameters. As such, the intricate details of the bifurcation structure of a physical experiment can be investigated. In its original form control-based continuation was based on Pyragas' time-delayed feedback control strategy, suitably modified to overcome the stability issues that occur in the vicinity of a saddle-node bifurcation (fold). It has since become a much more general methodology.
There are a wide range of possible applications for such investigations across engineering and the applied sciences. Specifically, there is a great deal of promise in combining such methods with ideas such as numerical substructuring, whereby a numerical model is coupled to a physical experiment in real-time via actuators and sensors.
The basic scheme (known as control-based continuation) works with standard numerical methods; however, the results are sub-optimal due to the comparative expense of making an experimental observation and the inherent noise in the measurement. This talk will present the current state-of-the-art and possibilities for future research in this area, from the development of numerical methods and control-strategies to more fundamental dynamical systems research.
The theory of large deviations is at the heart of recent progress in the field of statistical physics. I will discuss in this talk some developments that are interesting for non-equilibrium physics. In particular, I will insist on symmetries of large deviations and on analytical large deviation results.
The sociological notion of F-formations denotes the spatial configurations that people assume in social interactions, and an F-formation system denotes all the behavioural aspects that go into establishing and sustaining an F-formation between people. Kendon (1990) identified some of the geometrical aspects of such F-formations that have to do with the spatial positions and orientations of interlocutors. In this talk, I will be presenting some of our two-dimensional and three-dimensional simulations that are based on Kendon's geometrical aspects of F-formations. Discussions will also extend to the evaluations of the simulations carried out by participants during a pilot study, their outcomes and implications.
Bibliography:
Kendon A. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge: Cambridge University Press, 1990.
Energy landscape methods make use of the stationary points of the energy function of a system to infer some of its collective properties. Recently this approach has been applied to equilibrium phase transitions, showing that a connection between some properties of the energy landscape and the occurrence of a phase transition exists, at least for certain simple models.
I will discuss the study of the energy landscape of classical O(n) models defined on regular lattices and with ferromagnetic interactions. This study suggests an approximate expression for the microcanonical density of states of the O(n) models in terms of the energy density of the Ising model. If correct, this would imply the equivalence of the critical values of the energy densities of a generic O(n) model and the n=1 case, i.e., a system of Ising spins with the same interactions. Numerical and analytical results are in good agreement with this prediction.
The relation between quantum systems and their classical analogues is a subtle matter that has been investigated since the early days of quantum mechanics. Today, we have at our disposal powerful tools to formulate in a precise way the semi-classical limit. The understanding of quantum classical correspondence is of importance (a) for the interpretation and practical understanding of quantum effects, and (b) as a basis of a variety of simulation methods for quantum spectra and dynamics. Recently, there has been a growing interest in so-called non-Hermitian or "complexified" quantum theories. Applications include (i) decay, (ii) transport and scattering phenomena, (iii) dissipative systems, and (iv) PT-symmetric theories. In this talk I will present an overview of some of the issues and novelties arising in the investigation of the classical analogues of such "complexified" quantum theories, with applications ranging from optics to cold atoms and Bose-Einstein condensates.
The problem of convergence to equilibrium for diffusion processes is of theoretical as well as applied interest, for example in nonequilibrium statistical mechanics and in statistics, in particular in the study of Markov Chain Monte Carlo (MCMC) algorithms. Powerful techniques from analysis and PDEs, such as spectral theory and functional inequalities (e.g. logarithmic Sobolev inequalities) can be used in order to study convergence to equilibrium. Quite often, the diffusion processes that appear in applications are degenerate (in the sense that noise acts directly to only some of the degrees of freedom of the system) and/or nonreversible. The study of convergence to equilibrium for such systems requires the study of non-selfadjoint, possibly non-uniformly elliptic, second order differential operators. In this talk we will prove exponentially fast convergence to equilibrium for such diffusion processes using the recently developed theory of hypocoercivity. Furthermore, we will show how the addition of a nonreversible perturbation to a reversible diffusion can speed up convergence to equilibrium. This is joint work with M. Ottobre, K. Pravda-Starov, T. Lelievre and F. Nier.
Research in the field of relativistic quantum information aims at finding ways to process information using quantum systems taking into account the relativistic nature of spacetime. Cutting edge experiments in quantum information are already reaching regimes where relativistic effects can no longer be neglected. Ultimately, we would like to be able to exploit relativistic effects to improve quantum information tasks. In this talk, we propose the use of moving cavities for relativistic quantum information processing. Using these systems, we will show that non-uniform motion can change entanglement affecting quantum information protocols such as teleportation between moving parties. Via the equivalence principle, our results also provide a model of entanglement generation by gravitational effects.
The quantification of the complexity of networks is, today, a fundamental problem in the physics of complex systems. A possible roadmap to solve the problem is via extending key concepts of statistical mechanics and information theory to networks. In this talk we discuss recent works defining the Shannon entropy of a network ensemble and evaluating how it relates to the Gibbs and von Neumann entropies of network ensembles. The quantities we introduce here play a crucial role for the formulation of null models of networks through maximum-entropy arguments and contribute to inference problems emerging in the field of complex networks.
The interactions between the components of complex networks are often directed. Proper modeling of such systems frequently requires the construction of ensembles of directed graphs with a given sequence of in- and out-degrees. Previous algorithms used to generate such samples have either unknown mixing times, or lead often to unacceptably many rejections due to self-loops and multiple edges. I will present a method that can directly construct all possible directed realizations of a given degree sequence. This method is rejection-free, guarantees the independence of the constructed samples, and allows the calculation of statistical averages of network observables according to a uniform or otherwise chosen distribution.
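The rejection problem alluded to above is easy to see in the naive configuration-model baseline, which matches out-stubs to in-stubs at random and discards any matching containing self-loops or repeated edges; the talk's method avoids these rejections entirely. A sketch of that baseline only (the degree sequences and seed in the test are my own, and this is not the speaker's algorithm):

```python
import random

def naive_directed_sample(out_deg, in_deg, max_tries=10000, seed=0):
    """Configuration-model sampler with rejection: randomly match out-stubs
    to in-stubs and reject matchings with self-loops or multi-edges.
    Returns an edge list, or None if every attempt was rejected."""
    assert sum(out_deg) == sum(in_deg), "stub counts must match"
    rng = random.Random(seed)
    out_stubs = [v for v, d in enumerate(out_deg) for _ in range(d)]
    in_stubs = [v for v, d in enumerate(in_deg) for _ in range(d)]
    for _ in range(max_tries):
        rng.shuffle(in_stubs)
        edges = list(zip(out_stubs, in_stubs))
        simple = (all(u != v for u, v in edges)
                  and len(set(edges)) == len(edges))
        if simple:
            return edges   # a uniform sample among simple matchings
    return None
```

For dense or heavy-tailed degree sequences the acceptance probability of this scheme collapses, which is precisely the motivation for a rejection-free direct construction.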
It is well-known from Crauel and Flandoli (Additive noise destroys a pitchfork bifurcation, J. Dyn. & Diff. Eqs 10 (1998), 259-274) that adding noise to a system with a deterministic pitchfork bifurcation yields a unique random attracting fixed point with negative Lyapunov exponent for all parameters. Based on this observation, they conclude that the deterministic bifurcation is destroyed by the additive noise. However, we show that there is qualitative change in the random dynamics at the bifurcation point in the sense that, after the bifurcation, the Lyapunov exponent cannot be observed almost surely in finite time. We associate this bifurcation with a breakdown of both uniform attraction and equivalence under uniformly continuous topological conjugacies, and with non-hyperbolicity of the dichotomy spectrum at the bifurcation point. This is joint work with Mark Callaway, Jeroen Lamb and Doan Thai Son (all at Imperial College London).
The talk will be about 'contractive Markov systems' - a generalisation of an iterated function system. Under a 'contraction-on-average' condition, such systems have a unique invariant measure. By studying how the spectral properties of a certain linear operator acting on an appropriate function space perturb, we will discuss the stochastic stability of this invariant measure and other probabilistic results.
In this talk I will present a dynamical system called Fictitious Play Dynamics. This is a basic learning algorithm from Game Theory, modelling learning behaviour of players repeatedly playing a game. Dynamically, it can be described as a non-smooth (continuous and piecewise linear) flow on the three-sphere, with global sections whose first return maps are continuous, piecewise affine and area-preserving. I will show how these systems give rise to very intricate behaviour and how they can be studied via a family of rather simple planar piecewise affine maps.
Motivated by the classification of nonequilibrium steady states suggested by R. K. P. Zia and B. Schmittmann (J. Stat. Mech. 2007 P07012), I propose to measure the violation of the detailed balance criterion by the p-norm of the matrix formed by the probability currents. Its asymptotic analysis for the totally asymmetric simple exclusion process motivates the definition of a 'distance' from equilibrium. In addition, I show that the latter quantity and the average activity are both related to the probability distribution of the entropy production. Finally, considering the open asymmetric simple exclusion process and the open zero-range process, I show that the current of particles gives an exact measure of the violation of detailed balance.
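As a discrete-time toy version of this measure (an assumed form, not the paper's exact setting), one can compute the stationary current matrix J_ij = pi_i P_ij - pi_j P_ji of a finite Markov chain and take its entrywise p-norm; the norm vanishes exactly when detailed balance holds. The example chains in the test are my own.

```python
def stationary(P, iters=5000):
    """Stationary distribution of a row-stochastic matrix, by power iteration
    (assumes the chain is ergodic so the iteration converges)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def current_p_norm(P, p=2):
    """Entrywise p-norm of the probability-current matrix
    J[i][j] = pi[i]*P[i][j] - pi[j]*P[j][i].
    Zero iff the chain satisfies detailed balance."""
    pi = stationary(P)
    n = len(P)
    return sum(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) ** p
               for i in range(n) for j in range(n)) ** (1.0 / p)
```

A symmetric transition matrix gives zero norm (equilibrium), while a biased cycle, the simplest caricature of a driven system such as the exclusion process, gives a strictly positive value.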
A complex system in science and technology can often be represented as a network of interacting subsystems or subnetworks. If we follow a reductionist approach, it is natural (though not always wise!) to attempt to describe the dynamics of the network in terms of the dynamics of the subsystems of the network. Put another way, we often have a reasonable understanding of the "pieces", but how do they fit together, and what do they do collectively? In the simplest, and most studied cases, the subnetworks all run on the same clock (are updated simultaneously), and dynamics is governed by a fixed set of (usually analytic) dynamical equations: we say the network is synchronous (this is classical dynamics).
In biology, especially neuroscience, and technology, for example large distributed systems, these assumptions may not hold: components may run on different clocks, there may be switching between different dynamical equations, and most significantly, and quite unlike what happens in a classical synchronous network, component parts of the network may run independently of the rest of the network, and even stop, for periods of time. We say networks of this type are asynchronous.
It is a major challenge to develop the mathematical theory of dynamics on asynchronous networks. In this talk, we describe examples of dynamics on synchronous and asynchronous networks and point out how properties such as switching are forced by an asynchronous structure. We also indicate relationships with random dynamical systems and problems related to "qualitative computing".
We will present some recent results on the energetic cost of information processing in the framework of stochastic thermodynamics. This theory provides a consistent description of non-equilibrium processes governed by a Markovian dynamics. We shall discuss the physical role of the information exchange during measurement, feedback and erasure for systems driven by an external controller. We will also address the issue of quantifying the thermodynamic cost of sensing for autonomous two-component systems and discuss the connection between dissipation and information-theoretic correlation.
(Joint work with Matthew Urry)
We consider the problem of learning a function defined on the nodes of a graph, in a Bayesian framework with a Gaussian process prior. We show that the relevant covariance kernels have some surprising properties on large graphs, in particular as regards their approach to the limit of full correlation of the function values across all nodes.
Our main interest is in predicting the learning curves, i.e. the typical generalization error given a certain number of examples. We describe an approach for deriving these predictions that becomes exact in the limit of large random graphs. The validity of the method is broad and covers random graphs specified by arbitrary degree distributions, including the power-law distributions typical of social and other networks. We also discuss the effects of normalization of the covariance kernels. These are more intricate than for functions of real input variables, because of the variation in local connectivity structure on a graph. Time permitting, recent extensions to the case of learning with a mismatched prior will be covered.
We introduce a framework for compressing complex networks into powergraphs with overlapping powernodes. The most compressible components of a given network provide a highly informative sketch of its overall architecture. In addition this procedure also gives rise to a novel, link-based definition of overlapping node communities in which nodes are defined by their relationships with sets of other nodes, rather than through connections within the community. We show that this approach yields valuable insights into the large-scale structure of transcription networks, food webs, and social networks, and allows for novel ways in which network architecture can be studied, defined and classified. Furthermore, when paired with enrichment analysis of node classification terms, this method can provide a concise overview of the dominant conceptual relationships that define the network.
Over the past decade complex networks have come to be recognized as powerful tools for the analysis of complex systems. The defining feature of complexity is emergence; complex systems exhibit phenomena that do not originate in the parts of the system, but rather in their interactions. The underlying structural and dynamical properties behind these phenomena are therefore, almost by definition, delocalized across the network. But, a major driving force of network theory is the hope that we can nevertheless trace these properties back to localized structures in the network. In other words, we study global network-wide phenomena but often search for the magical red arrow that points at a certain part of the network and says 'This causes it!'.
In this talk I focus on the analytical investigation of network dynamics, where the network is considered as a large dynamical system. By combining approaches from dynamical systems theory and statistical physics with insights from network research, analytical progress in the investigation of these systems can be made. I show that network dynamics is generally inherently nonlocal, but also point out a fundamental reason why many important real-world phenomena can nevertheless be understood by a local dynamical analysis.
Many networks have cohesive groups of nodes called "communities". The study of community structure borrows ideas from many areas, and there exist myriad methods to detect communities algorithmically. Community structure has also proved insightful in many applications, as it can reveal social organization in friendship networks, groups of simultaneously active brain regions in functional brain networks, and more. My collaborators and I have been very active in studying community structure, and I will discuss some of our work on both methodological development and applications. I'll include examples from subjects like social networks, brain networks, granular materials, and more.
Abstract (Short): I will review ideas to approach the Graph Isomorphism Problem with tools linked to Quantum Information.
In this seminar I will discuss two distinct approaches to the structure of the world around us. In the first I'll discuss our implementation of a battery of thousands of signal processing tools as part of an attempt to organize our methods and to perform a sky-survey of types of dynamics. In the second I'll cover our work connecting topics in network analysis to parameterized complexity and outline how the complexity of some routing tasks on graphs scales with the number of communities rather than the number of nodes.
The immune system can recall and execute a large number of memorized defense strategies in parallel. The explanation for this ability turns out to lie in the topology of immune networks. We studied a statistical mechanical immune network model with `coordinator branches' (T-cells) and `effector branches' (B-cells), and show how the finite connectivity enables the system to manage an extensive number of immune clones simultaneously, even above the percolation threshold. The model is solvable using replica techniques, in spite of the fact that the network has an extensive number of short loops.
Adaptive networks are models of complex systems in which the structure of the interaction network changes on the same time-scale as the status of the nodes. For instance, consider the spread of a disease over a social network that is changing as people try to avoid the infection. In this talk I will try to persuade you that demographic noise (random fluctuations arising from the discrete nature of the components of the network) plays a major role in determining the behaviour of these models. These effects can be studied analytically by employing a reduced-dimension Markov jump process as a proxy.
The inclusion process is a driven diffusive system which exhibits a
condensation transition in certain scaling limits, where a fraction of
all particles condenses on a single lattice site. We study the dynamics
of this phenomenon, and identify all relevant dynamical regimes and
corresponding time scales as a function of the system size. This
includes a coarsening regime where clusters move on the lattice and
exchange particles, leading to a growing average cluster size. Suitable
observables exhibit a power law scaling in this regime before they
saturate to stationarity following an exponential decay depending on the
system size. For symmetric dynamics we have rigorous results on finite
lattices in the limit of infinitely many particles (joint work with
Frank Redig and Kiamars Vafayi). We have further heuristic results on
one-dimensional periodic lattices in the thermodynamic limit, covering
totally asymmetric and symmetric dynamics (joint work with Jiarui Cao
and Paul Chleboun), and preliminary results for a generalized version of
the symmetric process that exhibits finite time blow-up (joint work with
Yu-Xi Chau).
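The condensation dynamics described above can be illustrated with a minimal Monte Carlo sketch of the symmetric inclusion process on a one-dimensional periodic lattice (all parameter values here are illustrative, not those of the talk): a particle at site x hops to a neighbour y at rate n_x(d + n_y), so a small diffusion parameter d favours clustering of particles onto few sites.

```python
import random

def simulate_sip(L=8, N=200, d=0.1, steps=50000, seed=1):
    """Symmetric inclusion process on a 1D periodic lattice.

    A particle at site x hops to a nearest neighbour y with probability
    proportional to n_x * (d + n_y); small d favours condensation.
    """
    rng = random.Random(seed)
    n = [N // L] * L                      # roughly uniform initial profile
    for _ in range(steps):
        moves, weights = [], []
        for x in range(L):
            if n[x] == 0:
                continue
            for y in ((x - 1) % L, (x + 1) % L):
                moves.append((x, y))
                weights.append(n[x] * (d + n[y]))
        x, y = rng.choices(moves, weights=weights)[0]
        n[x] -= 1
        n[y] += 1
    return n

occ = simulate_sip()
```

The total particle number is conserved throughout the coarsening, while the occupation gradually concentrates on fewer sites.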
Why are large, complex ecosystems stable? For decades it has been conjectured that they have some unidentified structural property. We show that trophic coherence -- a hitherto ignored feature of food webs which current structural models fail to reproduce -- is significantly correlated with stability, whereas size and complexity are not. Together with cannibalism, trophic coherence accounts for over 80% of the variance in stability observed in a 16-food-web dataset. We propose a simple model which, by correctly capturing the trophic coherence of food webs,
accurately reproduces their stability and other basic structural features. Most remarkably, our model shows that stability can increase with size and complexity. This suggests a key to May’s Paradox, and a range of opportunities and concerns for biodiversity conservation.
Recently models of evolution have begun to incorporate structured populations, including spatial structure, through the modelling of evolutionary processes on graphs (evolutionary graph theory). We shall start by looking at some work on quite simple graphs. One limitation of this otherwise quite general framework, however, is that interactions are restricted to pairwise ones, through the edges connecting pairs of individuals. Yet many animal interactions can involve many players, and theoretical models also describe such multi-player interactions. We shall discuss a more general modelling framework of interactions of structured populations with the focus on competition between territorial animals, where each animal or animal group has a "home range" which overlaps with a number of others, and interactions between various group sizes are possible. Depending upon the behaviour concerned we can embed the results of different evolutionary games within our structure, as occurs for pairwise games such as the prisoner’s dilemma or the Hawk-Dove game on graphs. We discuss some examples together with some important differences between this approach and evolutionary graph theory.
The surface drawn by a potential energy function, which is usually a multivariate nonlinear function, is called the potential energy landscape (PEL) of the given Physical/Chemical system. The stationary points of the PEL, where the gradient of the potential vanishes, are used to explore many important Physical and Chemical properties of the system. Recently, we have employed the numerical algebraic geometry (NAG) method to study the stationary points of the PELs of various models arising from Physics and Chemistry and have discovered many of their interesting characteristics. In this talk, I will mention some of these results after giving a very brief introduction to the NAG method. I will then go on to discuss our latest adventure: exploring the PELs of random potentials with NAG, which will address not only one of the classic problems in Algebraic Geometry but will also find numerous applications in different areas such as String Theory, Statistical Physics, Neural Networks, etc.
First we discuss general fractal and critical aspects of the brain as indicated by recent fMRI analysis. We then turn to the analysis of EEG signals from the brains of musicians and listeners during performances of improvised and non-improvised classical music. We are interested in differences between the responses to the two different ways of playing music. We use measures of information flow to try to pinpoint differences in the structure of the network constituted by all the EEG electrodes of all musicians and listeners.
A polymer grafted to a surface exerts pressure on the substrate. Similarly, a surface-attached vesicle exerts pressure on the substrate. By using directed walk models, we compute the pressure exerted on the surface for grafted polymers and vesicles, and the effect of surface binding strength and osmotic pressure on this pressure.
Who are the most influential players in a social network? What's the origin of an epidemic outbreak? The answers to simple questions like these can hide incredibly difficult computational problems that require powerful methods for the inference, optimization, and control of dynamical processes on large networks.
I will present a statistical mechanics approach to inverse dynamical problems in the idealized framework provided by simple models of irreversible contagion and diffusion on networks (linear threshold model, susceptible-infected-removed epidemic model). Using the cavity method (belief propagation), it is possible to explore the large-deviation properties of these dynamical processes, and develop efficient message-passing algorithms to solve optimization and inference problems even on large networks.
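The forward model in the simplest of these settings can be written in a few lines; the following discrete-time SIR simulation on a network is a hypothetical toy (it is not the cavity/message-passing machinery of the talk, only the kind of process that machinery reasons about):

```python
import random

def sir_on_network(adj, p=0.3, seed_node=0, rng=None):
    """Discrete-time SIR on a network: each infected node infects each
    susceptible neighbour with probability p per step, then recovers."""
    rng = rng or random.Random(0)
    state = {v: 'S' for v in adj}
    state[seed_node] = 'I'
    while any(s == 'I' for s in state.values()):
        for v in [u for u in adj if state[u] == 'I']:
            for u in adj[v]:
                if state[u] == 'S' and rng.random() < p:
                    state[u] = 'I'       # newly infected act in the next round
            state[v] = 'R'
        # loop continues until no node is infected
    return sum(1 for s in state.values() if s == 'R')

# a small ring of 10 nodes, purely illustrative
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
outbreak = sir_on_network(ring, p=0.5)
```

The inference problems of the talk run this kind of process backwards: from observed states, infer the seed or the parameters.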
An analytical solution for a network growth model of intrinsic vertex fitness is presented, along
with a proposal for a new paradigm in fitness-based network growth models. This class of models
is classically characterised by a fitness linking mechanism that governs the attachment rate of new
links to existing nodes, and by a distribution of node fitness that measures the attractiveness of a node.
It is argued in the present paper that this distinction is unnecessary; instead, the linking propensity of
nodes can be expressed in terms of a ranking among existing nodes, which reduces the complexity
of the problem. At each time-step of this dynamical model either a new node joins the network and
is attached to one of the existing nodes or a new edge is added between two existing nodes with
probability proportional to the nodes' attractiveness. The full analytic theory connecting the fitness
distribution, the linking function, and the degree distribution is constructed. Given any two of these
characteristics, the third one can be determined in closed form. Furthermore, additional statistics
are computed to fully describe every aspect of this network model. One particularly interesting
finding is that for a factorisable, and not necessarily symmetric linking function, very restrictive
assumptions on the exact form of the linking function need to be imposed to find a power-law
degree distribution within this class of models.
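A minimal simulation sketch of this growth rule, with an illustrative choice of attractiveness (here simply a uniform random fitness per node, standing in for the linking function of the paper):

```python
import random

def grow_network(T=500, p_node=0.9, rng=None):
    """Fitness-based growth: with probability p_node a new node arrives and
    attaches to an existing node chosen proportionally to its attractiveness;
    otherwise an edge is added between two existing nodes chosen the same
    way.  attractiveness(i) = fitness f_i ~ Uniform(0, 1), purely illustrative."""
    rng = rng or random.Random(42)
    fitness = [rng.random(), rng.random()]    # start from a single edge 0-1
    edges = [(0, 1)]
    for _ in range(T):
        if rng.random() < p_node:
            target = rng.choices(range(len(fitness)), weights=fitness)[0]
            fitness.append(rng.random())
            edges.append((len(fitness) - 1, target))
        else:
            i, j = rng.choices(range(len(fitness)), weights=fitness, k=2)
            if i != j:
                edges.append((i, j))
    return fitness, edges

fitness, edges = grow_network()
```

The analytic theory of the paper then relates the chosen fitness distribution and linking function to the resulting degree distribution.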
Dangerous damage to mitochondrial DNA (mtDNA) between generations is ameliorated through a stochastic developmental process called the mtDNA bottleneck. The mechanism by which this process occurs is debated mechanistically and lacks quantitative understanding, limiting our ability to prevent the inheritance of mtDNA disease. We address this problem by producing a new, physically motivated, generalisable theoretical model for cellular mtDNA populations during development. This model facilitates, for the first time, a rigorous statistical treatment of experimental data on mtDNA during development, allowing us to resolve, with quantifiable confidence, the mechanistic question of the bottleneck. The mechanism with most statistical support involves random turnover of mtDNA with binomial partitioning at cell divisions and increased turnover during folliculogenesis. We analytically solve the equations describing this mechanism, obtaining closed-form results for all mtDNA and heteroplasmy statistics throughout development, allowing us to explore the effects of potential sampling strategies and dynamic interventions for the bottleneck. We find that increasing mtDNA degradation during the bottleneck may provide a general therapeutic target to address mtDNA disease. Our theoretical advances thus allow the first rigorous statistical analysis of data on the bottleneck, resolving and obtaining analytic results for its debated mechanism and suggesting clinical strategies to assess and prevent the possibility of inherited mtDNA disease.
Cultural change is often quantified by changes in frequency of cultural traits over time. Based on those (observable) frequency patterns researchers aim to infer the nature of the underlying evolutionary processes and therefore to identify the (unobservable) causes of cultural change. Especially in archaeological and anthropological applications this inverse problem gains particular importance, as occurrence or usage frequencies are often the only available information about past cultural traits or traditions and the forces affecting them. In this talk we start by analyzing the described inference problem and discuss it in the context of the question of which learning strategies human populations should deploy to be well-adapted to changing environmental conditions. To do so we develop a mathematical framework which establishes a causal relationship between changes in frequency of different cultural traits and the considered underlying evolutionary processes (in our case learning strategies). Besides gaining theoretical insights into the question of which learning strategies lead to efficient adaptation processes in changing environments, we focus on ‘reverse engineering’ conclusions about the learning strategies deployed in current or past populations, given knowledge of the frequency change dynamics over space and time. Using appropriate statistical techniques we investigate under which conditions population-level characteristics such as frequency distributions of cultural variants carry a signature of the underlying evolutionary processes and, if this is the case, how much information can be inferred from it. Importantly, we do not expect the existence of a unique relationship between observed frequency data and underlying evolutionary processes; on the contrary, we suspect that different processes can produce similar frequency patterns.
However, our approach might help narrow down the range of possible processes that could have produced those observed frequency patterns, and thus still be instructive in the face of uncertainty. Rather than identifying a single evolutionary process that explains the data, we focus on excluding processes that cannot have produced the observed changes in frequencies. In the last part of the talk, we demonstrate the applicability of the developed framework to anthropological case studies.
In April 2010 I gave a seminar at the Santa Fe Institute where I demonstrated that certain classic problems in economics can be resolved by re-visiting basic tenets of the formalism of decision theory. Specifically, I noted that simple mathematical models of economic processes, such as the random walk or geometric Brownian motion, are non-ergodic. Because of the non-stationarity of the processes, observables cannot be assumed to be ergodic, and this leads to a difference in important cases between time averages and ensemble averages. In the context of decision theory, the former tend to indicate how an individual will fare over time, while the latter may apply to collectives but are a priori meaningless for individuals. The effects of replacing expectation values by time averages are staggering -- realistic predictions for risk aversion, market stability, and economic inequality follow directly. This observation led to a discourse with Murray Gell-Mann and Kenneth Arrow about the history and development of decision theory, where the first studies of stochastic systems were carried out in the 17th century, and its relation to the development of statistical mechanics where refined concepts were introduced in the 19th century. I will summarize this discourse and present my current understanding of the problems.
Interacting self-avoiding walks as models for polymer collapse in dilute solution have been studied for many years. The canonical model, also known as the Theta model, is rather well understood, and it was expected that all models with short-range attractive interactions between “monomers” would give the same behaviour as the Theta model. In recent years a variety of models have been studied which do not conform to this expectation, and the observed behaviour depends on the specifics of the interaction and lattice.
In this talk I will review some of the known or conjectured results for these models, with particular attention to the self-avoiding trails and vertex-interacting self-avoiding walk models, and show how these models may be studied using extended transfer matrix methods (transfer matrices, DMRG and CTMRG methods). I will also present some results for the complex zeroes of the partition function as a method for finding critical points and estimates of the cross-over exponents for walk models.
In the past few years, multilayer, interdependent and multiplex networks have quickly become a major avenue in the mathematical modelling of networked complex systems, with applications in social sciences, large-scale infrastructures, information and communications technology, neuroscience, etc. In particular, it has been shown that such networks can describe the resilience of large coupled infrastructures (power grids, Internet, water systems, …) to failures, by studying percolation properties under random damage.
Percolation is perhaps the simplest model of network resilience and can be defined or extended to multiplex networks (defined as a network with multiple edge types) in many different ways. In some cases, new analytical approaches must be introduced to include features that are intrinsic to multiplex networks. In other cases, extensions of classical models give origin to new critical phenomena and complex behaviours.
Regarding the first case, I will illustrate a new theoretical approach to include edge overlap in a simple percolation
model. Edge overlap, i.e. node pairs connected on different layers, is a feature common to many empirical cases,
such as in transportation networks, social networks and epidemiology. Our findings illustrate properties of
multiplex resilience to random damage and may assist in the design of large-scale infrastructure.
Regarding the second aspect, I will present models of pruning and bootstrap percolation in multiplex networks. Bootstrap may be seen as a simple activation process and has applications in many areas of science. Our extension to multiplex networks can be solved analytically, has potential applications in network security, and provides a step in dealing with dynamical processes occurring on the network.
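As a single-layer toy of the activation process mentioned above, bootstrap percolation can be sketched as follows (the multiplex extension of the talk is not reproduced here; the threshold k and the graph are illustrative):

```python
def bootstrap_percolation(adj, seeds, k=2):
    """Bootstrap percolation: a node activates once it has at least k
    active neighbours; iterate to a fixed point and return the active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= k:
                active.add(v)
                changed = True
    return active

# a 4-clique with two seed nodes: the activation spreads to everyone
clique = {v: [u for u in range(4) if u != v] for v in range(4)}
active = bootstrap_percolation(clique, seeds={0, 1}, k=2)
```

On multiplex networks the activation condition is imposed per layer (or across layers), which is where the new critical phenomena arise.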
Organisers: Leon Danon and Rosemary J. Harris
Complex systems theory has played an increasingly important role in infectious disease epidemiology. From the fundamental basis of transmission between two interacting individuals, complexity can emerge at all scales, from small outbreaks to global pandemics. Traditional ODE models rely on simplistic characterisations of interactions and transmission, but as more and more data become available these are no longer necessary. The descriptive and predictive power of transmission models can be improved by statistical descriptions of behaviour and movement of individuals, and tools from complex systems contribute greatly to the discussion.
This workshop will cover advances in mathematical epidemiology that have been shaped by complex systems approaches. The workshop is intended to cover a broad spectrum of topics, from theoretical aspects of transmission on networks to current work shaping public policy on diseases of livestock and honey bees.
Attendance at this workshop is free and open to everyone. However, for catering purposes, please register your attendance via email to l.danon@qmul.ac.uk or rosemary.harris@qmul.ac.uk by 21st March.
The meeting is part of the CoSyDy series, a London Mathematical Society Scheme 3 network bringing together UK mathematicians investigating Complex Systems Dynamics. Travel support is available for participants from the member nodes.
Schedule:
Time | Speaker | Title
---|---|---
11:30-12:10 | Vincent Jansen, Royal Holloway, University of London | Rats, Fleas and the Tip of the Tongues: Modelling the Epidemiology of the Plague
12:10-12:40 | Jon Read, University of Liverpool | Mobility, social encounter patterns and influenza exposure in Southern China
12:40-13:30 | Buffet Lunch |
13:30-14:10 | Frank Ball, University of Nottingham | Epidemics on random networks with tunable clustering, degree correlation and degree distribution [PDF 20KB]
14:10-14:40 | Kieran Sharkey, University of Liverpool | Prevalence, invasion and duality for SIS dynamics on finite Networks
14:40-15:10 | Helen Johnson, London School of Hygiene and Tropical Medicine | Keeping it Real: Calibration and Parametric Inference for Complex Epidemic Models [PDF 17KB]
15:10-15:40 | Tea and Coffee |
15:40-16:20 | Rowland Kao, University of Glasgow | Supersize me: how big data and whole genome sequencing are transforming epidemiology
16:20-16:50 | Mike Tildesley, University of Exeter | Mathematical Modelling of Infectious Diseases in the Presence of Uncertainty
16:50-17:20 | Samik Datta, University of Warwick | Modelling the spread of disease in honeybees
17:20- | Drinks and Discussion |
All talks will now be in the Maths Lecture Theatre of the Mathematics Building. The full programme is also available as a pdf attachment below.
Attachment | Size |
---|---
cosydy-qmul2014plan.pdf [PDF 63KB] | 63.24 KB
The Brauer loop model is an integrable lattice model based on the Brauer
algebra, with crossings of loops allowed. The ground state of the
transfer matrix is calculable (with some caveats) via the quantum
Knizhnik--Zamolodchikov (qKZ) equation, a technique that expresses the
ground state components in terms of each other. This method has been
used frequently for lattice models of this type.
In 2005 de Gier and Nienhuis noticed a connection between the ground
state of the periodic Brauer loop model and the degrees of the
irreducible components of a certain algebraic scheme as calculated by
Knutson in 2003. This connection was explored further by Di Francesco
and Zinn-Justin in 2006, and proved shortly thereafter by Knutson and
Zinn-Justin. The irreducible components can be labelled by the basis
elements of the ground state, and the final proof involves showing that
the multidegrees (an extension of the concept of polynomial degree) of
these irreducible components also satisfy the qKZ equation. This
connection seems similar in spirit to the connection between integrable
models and combinatorics, but is much less explored.
Nonlinear dynamics of neuron-neuron interaction via complex networks lie at the base of all brain activity. How such inter-cellular communication gives rise to behavior of the organism has been a long-standing question. In this talk, we first explore the evidence for the occurrence of such mesoscopic structures in the nervous system of the nematode C. elegans and in the macaque cortex. Next, we look at their possible functional role in the brain. We also consider attractor network models of nervous system activity and investigate how modular structures affect the dynamics of convergence to attractors. We conclude with a discussion of the general implications of our results for the basin size of dynamical attractors in modular networks whose nodes have threshold-activated dynamics. As such networks also appear in the context of intra-cellular signaling, our results may provide a glimpse of a universal (i.e., scale-invariant) theory for information processing dynamics in biology.
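The threshold-activated dynamics mentioned above can be sketched on a toy modular network (two disconnected triangles here; the network, threshold and initial condition are all illustrative):

```python
def run_to_attractor(adj, state, theta=0, max_steps=100):
    """Synchronous threshold dynamics: a node goes to +1 when the summed
    state of its neighbours exceeds theta, else -1.  Iterate until the
    configuration repeats; return the final state and the cycle period."""
    seen = {}
    for t in range(max_steps):
        key = tuple(state[v] for v in sorted(adj))
        if key in seen:
            return state, t - seen[key]
        seen[key] = t
        state = {v: 1 if sum(state[u] for u in adj[v]) > theta else -1
                 for v in adj}
    return state, None

# two modules (disconnected triangles), one active and one inactive:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
init = {0: 1, 1: 1, 2: 1, 3: -1, 4: -1, 5: -1}
final, period = run_to_attractor(adj, init)
```

Here each module settles independently into its own state, a fixed point (period 1); the question of the talk is how the modular structure shapes the basins of such attractors.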
Membranes or membrane-like materials play an important role in many fields ranging from biology to physics. These systems form a very rich domain in statistical physics. The interplay between geometry and thermal fluctuations leads to exciting phases such as flat, tubular and disordered flat phases. Membranes can be divided into two groups: fluid membranes, in which the molecules are free to diffuse and which therefore have no shear modulus, and polymerized membranes, in which the connectivity is fixed, leading to elastic forces. This difference between fluid and polymerized membranes leads to a difference in their critical behaviour. For instance, fluid membranes are always crumpled, whereas polymerized membranes exhibit a phase transition between a crumpled phase and a flat phase. In this talk, I will focus only on polymerized phantom, i.e. non-self-avoiding, membranes. The critical behaviour of both isotropic and anisotropic polymerized membranes is studied using a nonperturbative renormalization group (NPRG) approach. This allows for the investigation of the phase transitions and the low-temperature flat phase in any internal dimension D and embedding dimension d. Interestingly, from the point of view of its mechanical properties, graphene identifies with the flat phase.
When driven out of equilibrium by a temperature gradient, fluids respond by developing a nontrivial, inhomogeneous structure according to the governing macroscopic laws. Here we show that such structure obeys strikingly simple universal scaling laws arbitrarily far from equilibrium, provided that both macroscopic local equilibrium (LE) and Fourier’s law hold. These results, which we prove for hard sphere fluids and more generally for systems with homogeneous potentials in arbitrary dimension, are likely to remain valid in the much broader family of strongly correlating fluids where excluded volume interactions are dominant. Extensive simulations of hard disk fluids confirm the universal scaling laws even under strong temperature gradients, suggesting that Fourier’s law remains valid in this highly nonlinear regime, with the expected corrections absorbed into a non-linear conductivity functional. Our results also show that macroscopic LE is a very strong property, allowing us to measure the hard disk equation of state in simulations far from equilibrium with a surprising accuracy, comparable to the best equilibrium simulations. Subtle corrections to LE are found in the fluctuations of the total energy, which strongly point to the non-locality of the nonequilibrium potential governing the fluid’s macroscopic behavior out of equilibrium. Finally, our simulations show that both LE and the universal scaling laws are robust in the presence of strong finite-size effects, via a bulk-boundary decoupling mechanism by which all sorts of spurious finite-size and boundary corrections sum up to renormalize the effective boundary conditions imposed on the bulk fluid, which behaves macroscopically.
The topological entropy is a measure of the complexity
of a map. In this talk I will explain this notion in some detail and
report on a recent result with H.H. Rugh on the regularity of the
topological entropy of interval maps with holes as a function of the hole
position and size.
Many complex systems are characterised by distinct types of
interactions among a set of elementary units, and their structure can
be thus better modelled by means of multi-layer networks. A
fundamental open question is then how many layers are really necessary
to accurately represent a multi-layered complex system. Drawing on the
formal analogy between quantum density operators and the normalised
Laplacian of a graph, we develop a simple framework to reduce the
dimensionality of a multiplex network while minimizing information
loss. We will show that the number of informative layers in some
natural, social and collaboration systems can be substantially
reduced, while multi-layer engineered and transportation systems, for
which the redundancy is deliberately avoided in order to maximise their
efficiency, are essentially irreducible.
(Work in collaboration with F. Font-Clos, G. Pruessner, A. Deluca)
When analysing time series it is common to apply thresholds. For example, this could be to eliminate
noise coming from the resolution limitations of measuring devices, or to focus on extreme events in the case
of high thresholds. We analyse the effect of applying a threshold to the duration time of a birth-death
process. This toy model allows us to work out the form of the duration time density in full detail. We find
that duration times decay with random walk exponent -3/2 for `short' times, and birth-death exponent -2
for `long' times, where short and long are characterised by a threshold-imposed timescale. For sparse data
the ultimate -2 exponent of the underlying (multiplicative) process may never be observed. This may have
implications for real-world data in the interpretation of threshold-specific decay exponents.
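A rough toy version of this measurement, using a simple reflected random walk in place of the full multiplicative birth-death process (so the exponents and timescales here are illustrative, not those of the talk):

```python
import random

def durations_above_threshold(rng, h=5, n0=6, steps=200000):
    """Track a random-walk signal n(t) (reflected at 1 to keep it alive,
    a toy choice) and record the lengths of its excursions above h."""
    n, durations, start = n0, [], None
    for t in range(steps):
        n = max(n + (1 if rng.random() < 0.5 else -1), 1)
        if n > h and start is None:
            start = t                      # excursion above threshold begins
        elif n <= h and start is not None:
            durations.append(t - start)    # excursion ends
            start = None
    return durations

durations = durations_above_threshold(random.Random(1))
```

Histogramming `durations` on logarithmic bins exposes the threshold-imposed crossover between the short-time and long-time decay regimes discussed above.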
I will discuss the mean field kinetics of irreversible coagulation in
the presence of a source of monomers and a sink at large cluster sizes
which removes large particles from the system. These kinetics are
described by the Smoluchowski coagulation equation supplemented with
source and sink terms. In common with many driven dissipative systems with
conservative interactions, one expects this system to reach a stationary
state at large times characterised by a constant flux of mass in the
space of cluster sizes from the small-scale source to the large scale sink.
While this is indeed the case for many systems, I will present here a
class of systems in which this stationary state is dynamically unstable.
The consequence of this instability is that the long-time kinetics are
oscillatory in time. This oscillatory behaviour is caused by the fact that
mass is transferred through the system in pulses rather than via a stationary
current in such a way that the mass flux is constant on average. The
implications of this unusual behaviour for the non-equilibrium kinetics of
other systems will be discussed.
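The Smoluchowski equation with source and sink can be integrated numerically; the sketch below uses the constant kernel (an illustrative choice for which one expects a stable stationary state, unlike the unstable class of the talk). Clusters that would exceed the array length are simply removed, playing the role of the sink; a constant monomer source J feeds the small-size end.

```python
def smoluchowski_step(c, J=1.0, dt=0.001):
    """One Euler step of the constant-kernel Smoluchowski equation with a
    monomer source J; coagulation products larger than len(c) are removed
    (the sink).  c[k-1] is the concentration of clusters of size k."""
    M = len(c)
    total = sum(c)
    new = c[:]
    for k in range(1, M + 1):
        gain = 0.5 * sum(c[i - 1] * c[k - i - 1] for i in range(1, k))
        loss = c[k - 1] * total
        new[k - 1] += dt * (gain - loss)
    new[0] += dt * J                      # monomer source
    return new

c = [0.0] * 20
for _ in range(5000):                     # integrate to t = 5
    c = smoluchowski_step(c)
```

For this kernel the total cluster density relaxes monotonically towards a stationary value; the oscillatory, pulse-like mass transfer of the talk requires kernels for which this stationary state is dynamically unstable.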
Contemporary finance is characterized by a complex pattern of relations between financial institutions that can be conveniently modeled in terms of networks.
In stable market conditions, connections allow banks to diversify their investments and reduce their individual risk. The same networked structure may, however, become a source of contagion and stress amplification when some banks go bankrupt.
We consider a network model of financial contagion due to the combination of overlapping portfolios and market-impact, and we show how it can be understood in terms of a generalized branching process. We estimate the circumstances under which systemic instabilities are likely to occur as a function of parameters such as leverage, market crowding and diversification.
The analysis shows that the probability of observing global cascades of bankruptcies is a non-monotonic function of the average diversification of financial institutions, and that there is a critical threshold for leverage below which the system is stable. Moreover the system exhibits "robust yet fragile" behavior, with regions of the parameter space where contagion is rare but catastrophic whenever it occurs.
In this talk - which will be accessible to a general audience - we show how the asymptotic behavior of random networks gives rise to universal statistical summaries. These summaries are related to concepts that are well understood in other contexts - such as stationarity and ergodicity - but whose extension to networks requires recent developments from the theory of graph limits and the corresponding analog of de Finetti's theorem. We introduce a new tool based on these summaries, which we call a network histogram, obtained by fitting a statistical model called a blockmodel to a large network. Blocks of edges play the role of histogram bins, and so-called network community sizes that of histogram bandwidths or bin sizes. For more details, see recent work in the Proceedings of the National Academy of Sciences (doi:10.1073/pnas.1400374111, with Sofia Olhede) and the Annals of Statistics (doi:10.1214/13-AOS1173, with David Choi).
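Once a community assignment is in hand, the histogram itself is just a matrix of block edge densities. A minimal sketch (the community labels here are given, whereas in the actual method they are fitted to the network):

```python
def network_histogram(adj_matrix, labels):
    """Blockmodel 'histogram': entry (a, b) is the empirical edge density
    between the nodes of community a and those of community b."""
    groups = sorted(set(labels))
    dens = [[0.0] * len(groups) for _ in groups]
    for a_i, a in enumerate(groups):
        for b_i, b in enumerate(groups):
            nodes_a = [i for i, l in enumerate(labels) if l == a]
            nodes_b = [i for i, l in enumerate(labels) if l == b]
            pairs = [(i, j) for i in nodes_a for j in nodes_b if i != j]
            if pairs:
                dens[a_i][b_i] = sum(adj_matrix[i][j] for i, j in pairs) / len(pairs)
    return dens

# two communities of two nodes each, with only within-community edges
adj = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
dens = network_histogram(adj, labels=[0, 0, 1, 1])
```

The block densities play the role of histogram bar heights, with the community sizes acting as the bandwidths.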
Financial markets are complex systems with a large number of different factors contributing in an interrelated way. Complexity mainly manifests in two aspects: 1) changes in the statistical properties of financial signals when analyzed at different time-scales; 2) dependency and causality structure dynamically evolving in time. These (non-stationary) changes are more significant during periods of market stress and crises.
In this talk I’ll discuss methods to study financial market complexity from a statistical perspective. Specifically, I’ll introduce two approaches: 1) multi-scaling studies by means of novel scaling exponents and complexity measures; 2) network filtering techniques to make sense of big data.
I will discuss practical applications showing how a better understanding of market complexity can be used, in practice, to hedge risk and discover market inefficiencies.
Arnold’s cat map is a prototypical dynamical system on the torus with uniformly hyperbolic dynamics. Since the famous picture of
a scrambled cat in the 1968 book by Arnold and Avez, it has become one of the icons of chaos. In 2010, Lev Lerman studied a family of maps homotopic to the cat map that has, in addition to a saddle, a parabolic fixed point. Lerman conjectured that this map could be a prototype for dynamics with a mixed phase space, having positive measure sets of nonuniformly hyperbolic and of elliptic orbits. We present some numerical evidence that supports Lerman’s conjecture. The elliptic orbits appear to be confined to a pair of channels bounded by invariant manifolds of the two fixed points. The complement of the channels appears to be a positive measure Cantor set. Computations show that orbits in the complement have positive Lyapunov exponents.
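The kind of computation behind such numerical evidence can be sketched for the unperturbed cat map itself (a minimal illustration of my own, not the actual computation of the study): the largest Lyapunov exponent is estimated by evolving a tangent vector with the Jacobian and renormalizing.

```python
import numpy as np

def cat_map(p):
    """Arnold's cat map on the torus: (x, y) -> (2x + y, x + y) mod 1."""
    x, y = p
    return np.array([(2*x + y) % 1.0, (x + y) % 1.0])

def largest_lyapunov(p0, n=2000):
    """Estimate the largest Lyapunov exponent along an orbit by evolving a
    tangent vector with the Jacobian (constant for the cat map) and
    renormalizing it at every step."""
    J = np.array([[2.0, 1.0], [1.0, 1.0]])
    v = np.array([1.0, 0.0])
    p, total = np.asarray(p0, dtype=float), 0.0
    for _ in range(n):
        v = J @ v
        total += np.log(np.linalg.norm(v))
        v /= np.linalg.norm(v)
        p = cat_map(p)
    return total / n

# For the cat map the exact value is log((3 + sqrt(5))/2) ~ 0.9624.
```

For a perturbed, nonuniformly hyperbolic map the Jacobian would depend on the point along the orbit, but the renormalization scheme is the same.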
Empirical evidence suggesting that living systems might operate in the vicinity of critical points, at the borderline between order and disorder, has proliferated in recent years, with examples ranging from spontaneous brain activity to the dynamics of gene expression and flock dynamics. However, a well-founded theory for understanding how and why living systems tune themselves to be poised in the vicinity of a critical point is lacking. In this talk I will review the concept of criticality with its associated scale invariance and power-law distributions. I will discuss mechanisms by which inanimate systems may self-tune to critical points and compare such phenomenology with what is observed in living systems. I will also introduce the concept of a Griffiths phase (an old acquaintance from the physics of disordered systems) and show how it can be very naturally related to criticality in living structures such as the brain. In particular, taking into account the complex hierarchical-modular architecture of cortical networks, the usual singular critical point in the dynamics of neural activity propagation is replaced by an extended critical-like region with a fascinating dynamics, which might justify the trade-off between segregation and integration needed to achieve complex cognitive functions.
In the talk I will demonstrate, through specific examples, the emergence of a new field, "statistical topology", which unifies topology, noncommutative geometry, probability theory and random walks. In particular, I plan to discuss the following interlinked questions: (i) how ballistic growth (the "Tetris" game) is related to random walks in symmetric spaces and the quantum Toda chain; (ii) what the optimal structure of a salad leaf in 3D is and how it is related to modular functions and hyperbolic geometry; (iii) what the fractal structure of an unknotted long polymer chain confined in a bounding box is and how this is related to Brownian bridges in spaces of constant negative curvature.
The use of ac fields allows one to precisely control the motion of particles in periodic potentials. We demonstrate such precise control with cold atoms in driven optical lattices, using two very different mechanisms: the ratchet effect and vibrational mechanics. In the first, ac fields drive the system away from equilibrium and break relevant symmetries; in the second, they lead to a renormalisation of the potential.
This is part of a series of collaborative meetings between Bristol, Leicester, Liverpool, Loughborough, Manchester, Queen Mary, Surrey, and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
For speakers, schedule, titles, and abstracts see the meeting webpage.
We consider random quantum walks on a homogeneous tree of degree 3 describing the discrete time evolution of a quantum particle with internal degree of freedom in C^3 hopping on the neighboring
sites of the tree in the presence of static disorder. The one-time-step random unitary evolution operator of the particle depends on a unitary matrix C in U(3) which monitors the strength of the disorder.
We show the existence of open sets of matrices in U(3) for which the random evolution has either pure point spectrum almost surely or purely absolutely continuous spectrum.
We also establish properties of the spectral diagram which provide a description of the spectral transition driven by C in U(3). This is joint work with Eman Hamza.
In this special lecture, organized within our MSc Mathematics of Networks/Network Science, Dr. Jim Webber, chief scientist at Neo Technology, will talk about how Network Science is used in industry on a daily basis, within their software Neo4j.
Jim will introduce the notion of graph databases for storing and querying connected data structures. He will also look under the covers at Neo4j's design, and consider how the requirements for correctness and performance of connected data drive the architecture. Moving up the stack, he will explore Neo4j's Cypher query language and show how it can be used to tackle complex scenarios like recommendations in minutes (with live programming, naturally!). Finally he will discuss what it means to be a very large graph database and review the dependability requirements to make such a system viable.
Everybody is welcome, and we especially invite all our MSc and PhD students to attend, as it can be an excellent forum for discussion between academia and industry.
In this talk we explore different ways to construct city boundaries and their relevance to current efforts towards a science of cities. We use percolation theory to understand the hierarchical organisation of the urban system, and look at the morphological characteristics of urban clusters for traces of optimization or universality.
The constituents of a wide variety of real-world complex systems interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Recently, the research community's interest in such systems has increased, because accounting for the "multilayer" features of those systems is a challenge. In this lecture, we will discuss several real-world examples, highlight their multilayer structure, and review the most recent advances in this new field.
We show that the mixed phase space dynamics of a typical smooth Hamiltonian system universally leads to sustained exponential growth of energy under slow periodic variation of parameters. We build a model for this process in terms of geometric Brownian motion with a positive drift, and relate it to the steady entropy increase after each period of the parameter variation.
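The distinction between typical and ensemble-averaged growth in geometric Brownian motion can be illustrated numerically (a sketch of my own with illustrative parameters, not the model of the talk): the typical pathwise growth rate of log E is mu - sigma^2/2, while the ensemble-averaged energy grows at the faster rate mu.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, dt, steps, n_paths = 0.05, 0.1, 1.0, 400, 2000

# Increments of log E for geometric Brownian motion with drift mu:
# d(log E) has mean (mu - sigma^2/2) dt and variance sigma^2 dt.
incr = rng.normal((mu - sigma**2/2)*dt, sigma*np.sqrt(dt), (n_paths, steps))
logE = np.cumsum(incr, axis=1)

typical_rate = np.median(logE[:, -1]) / (steps*dt)                 # pathwise rate
ensemble_rate = np.log(np.mean(np.exp(logE[:, -1]))) / (steps*dt)  # rate of <E>
```

The gap between the two rates, sigma^2/2, is the signature of the multiplicative (log-normal) statistics underlying the exponential energy growth.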
The use of the so-called Coulomb gas technique in Random Matrix Theory goes back to the seminal works of Wigner and Dyson. I review some modern (and not so modern!) applications of this technique, which are linked via a quite intriguing unifying thread: the appearance of extremely weak (third-order) phase transitions separating the equilibrium phases of the fluid of "eigenvalues". A particularly interesting example concerns the statistics of the largest eigenvalue of random matrices, and the probability of atypical fluctuations not described by the celebrated Tracy-Widom law. Recent occurrences of this type of phase transition in condensed matter and statistical physics problems - which apparently have very little to do with each other - are also addressed, as well as some "exceptions" or "counter-examples".
Ripening in systems where the overall aggregate volume increases due to
chemical reactions or the drift of thermodynamic parameters is a problem
of pivotal importance in the materials and environmental sciences. In
the former its better understanding provides insight into controlling
nanoparticle synthesis, annealing, and aging processes. In the latter
it is of fundamental importance to improve the parametrization of mist
and clouds in weather and climate models.
I present the results of comprehensive laboratory experiments and
numerical studies addressing droplet growth and droplet size
distributions in systems where droplets grow due to sustained
supersaturation of their environment. For both classical theories
addressing droplets condensing on a substrate (as in dew and cooling
devices) and those addressing droplets entrained in an external flow (as in clouds and
nanoparticle synthesis), we identify severe shortcomings. I will show
that the quantitative modelling of rain formation in clouds on the one
hand and of the ageing and synthesis of nanoparticles on the other hand
face the same theoretical challenges, and that these challenges can be
addressed by adapting modern methods of non-equilibrium statistical
physics.
I will discuss methods for spatio-temporal modelling in molecular,
cell and population biology. Three classes of models will be considered:
(i) microscopic (individual-based) models (molecular dynamics,
Brownian dynamics) which are based on the simulation of
trajectories of molecules (or individuals) and their localized
interactions (for example, reactions);
(ii) mesoscopic (lattice-based) models which divide the computational
domain into a finite number of compartments and simulate the time
evolution of the numbers of molecules (numbers of individuals)
in each compartment; and
(iii) macroscopic (deterministic) models which are written in terms
of mean-field reaction-diffusion-advection partial differential
equations (PDEs) for spatially varying concentrations.
In the first part of my talk, I will discuss connections between the
modelling frameworks (i)-(iii). I will consider chemical reactions both at
a surface and in the bulk. In the second part of my talk, I will present
hybrid (multiscale) algorithms which use models with a different level
of detail in different parts of the computational domain.
The main goal of this multiscale methodology is to use a detailed
modelling approach in localized regions of particular interest
(in which accuracy and microscopic detail is important) and a less
detailed model in other regions in which accuracy may be traded
for simulation efficiency. I will also discuss hybrid modelling
of chemotaxis where an individual-based model of cells is coupled
with PDEs for extracellular chemical signals.
A famous problem of fluid dynamics is the flow around a cylindrical or spherical obstacle. At small flow velocity, a steady axisymmetric wake forms behind the obstacle; upon increasing the velocity the wake becomes longer, then asymmetric and time dependent (vortices of alternating signs are shed in the von Karman vortex street pattern), then turbulent. The question which we address is what happens if the fluid is a superfluid, such as liquid He, or an atomic Bose-Einstein condensate: in the absence of viscosity, is there a quantum analog to the classical wake?
TBA
The linear fractional equation involving a Riemann-Liouville derivative
is the standard model for the description of anomalous subdiffusive
transport of particles. The question arises as to how to extend this fractional
equation to the nonlinear case involving particle interactions.
The talk will be concerned with the structural instability of the fractional Fokker-Planck
equation, nonlinear fractional PDEs, and aggregation phenomena.
In this talk, making use of statistical physics tools, we address the specific role of randomness in financial markets, at both the micro and macro levels. In particular, we will review some recent results on the effectiveness of random investment strategies, compared with some of the most widely used trading strategies for forecasting the behavior of real financial indexes. We then push our analysis further by means of a Self-Organized Criticality model, able to simulate financial avalanches in trading communities with different network topologies, where a Pareto-like power-law behavior of wealth spontaneously emerges. In this context we present new findings and suggestions for policies based on the effects that random strategies can have in terms of reducing dangerous financial extreme events, i.e. bubbles and crashes.
References
A.E. Biondo, A. Pluchino, A. Rapisarda, Contemp. Phys. 55, 318 (2014).
A.E. Biondo, A. Pluchino, A. Rapisarda, D. Helbing, Phys. Rev. E 88, 062814 (2013).
A.E. Biondo, A. Pluchino, A. Rapisarda, D. Helbing, PLOS ONE 8(7), e68344 (2013).
A.E. Biondo, A. Pluchino, A. Rapisarda, J. Stat. Phys. 151, 607 (2013).
Teaching mathematical writing gives you a vivid portrait of the students' struggle with exactness and abstraction, and new tools for dealing with it. This seminar intends to stimulate a discussion on how we introduce our
students to abstract mathematics; I also hope to give a positive twist to the soul-searching that normally accompanies exam-marking.
The densest way to pack objects in space, also known as the packing problem, has intrigued scientists and philosophers for millennia. Today, packing comes up in various systems over many length scales, from batteries and catalysts to the self-assembly of nanoparticles, colloids and biomolecules. Despite the fact that so many systems' properties depend on the packing of differently-shaped components, we still have no general understanding of how packing varies as a function of particle shape. Here, we carry out an exhaustive study of how packing depends on shape by investigating the packings of over 55,000 polyhedra. By combining simulations and analytic calculations, we study families of polyhedra interpolating between Platonic and Archimedean solids such as the tetrahedron, the cube, and the octahedron. Our resulting density surface plots can be used to guide experiments that utilize shape and packing in the same way that phase diagrams are essential to chemistry. The properties of particle shape are indeed revealing why we can assemble certain crystals, transition between different ones, or get stuck in kinetic traps.
Links: http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011024,
http://www.newscientist.com/article/dn25163-angry-alien-in-packing-puzzl...,
http://physicsworld.com/cws/article/news/2014/mar/03/finding-better-ways...,
http://physics.aps.org/synopsis-for/10.1103/PhysRevX.4.011024
My website: http://www-personal.umich.edu/~dklotsa/Daphne_Klotsas_Homepage/Home.html
Transfer operators are global descriptors of ensemble evolution under nonlinear dynamics and form the basis of efficient methods of computing a variety of statistical quantities and geometric objects associated with the dynamics.
I will discuss two related methods of identifying and tracking coherent structures in time-dependent fluid flow; one based on probabilistic ideas and the other on geometric ideas.
Applications to geophysical fluid flow will be presented.
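By way of a minimal, hypothetical example of a transfer-operator computation (my own sketch, much simpler than the methods of the talk): Ulam's method discretizes the transfer operator of a map into a stochastic matrix whose leading eigenvector approximates the invariant density. For the doubling map the invariant density is Lebesgue measure.

```python
import numpy as np

def ulam_matrix(f, n_bins=200, samples=1000):
    """Ulam discretization of the transfer operator of a map f on [0,1]:
    P[i, j] is the fraction of test points in bin i that f sends to bin j."""
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        # midpoint test points inside bin i
        x = (i + (np.arange(samples) + 0.5)/samples) / n_bins
        j = np.minimum((f(x)*n_bins).astype(int), n_bins - 1)
        for jj in j:
            P[i, jj] += 1.0/samples
    return P

doubling = lambda x: (2.0*x) % 1.0
P = ulam_matrix(doubling)

# The invariant density is the left eigenvector of P with eigenvalue 1;
# for the doubling map it is the uniform density.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
```

The same discretized operator, applied to time-dependent flows, underlies the probabilistic approach to detecting coherent structures.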
Many state-of-the-art music generation/improvisation systems generate music that
sounds good on a note-to-note level. However, these compositions often lack long term
structure or coherence. This problem is addressed in this research by generating music that
adheres to a structural template. A powerful variable neighbourhood search algorithm (VNS)
was developed, which is able to generate a range of musical styles based on its
objective function, whilst constraining the music to a structural template. In the first
stage of the project, an objective function based on rules from music theory was used to
generate counterpoint. In this research, a machine learning approach is combined with the
VNS in order to generate structured music for the bagana, an Ethiopian lyre. Different
ways are explored in which a Markov model can be used to construct quality metrics that
represent how well a fragment fits the chosen style (e.g. music for bagana). This approach
allows us to combine the power of machine learning methods with optimization algorithms.
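The basic idea of a Markov-model quality metric can be sketched as follows (a toy of my own, not the actual bagana model): a bigram model learned from a corpus yields an average log-probability that an optimizer such as a VNS can maximize.

```python
from collections import Counter
import math

def bigram_model(corpus):
    """Learn bigram (first-order Markov) transition probabilities
    from a corpus of example sequences."""
    counts, totals = Counter(), Counter()
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {k: v / totals[k[0]] for k, v in counts.items()}

def quality(fragment, model, eps=1e-6):
    """Average log-probability of a fragment under the learned style model;
    unseen transitions are smoothed with a small probability eps."""
    lp = [math.log(model.get((a, b), eps)) for a, b in zip(fragment, fragment[1:])]
    return sum(lp) / len(lp)
```

A fragment whose transitions all occur in the corpus scores higher than one containing out-of-style transitions, which is exactly the signal an optimization algorithm needs.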
The synaptic inputs arriving in the cortex are under many circumstances
highly variable. As a consequence, the spiking activity of cortical
neurons is strongly irregular, with the coefficient of variation of
the inter-spike interval distribution of individual neurons being
close to one, as for a Poisson process. To model this activity, balanced networks
have been put forward where a coordination between excitatory and strong
inhibitory input currents, which nearly cancel in individual neurons,
gives rise to this irregular spiking activity. However, balanced
networks of excitatory and inhibitory neurons are characterized by a
strictly linear relation between stimulus strength and network firing
rate. This linearity makes it hard to perform more complex computational
tasks like the generation of receptive fields, multiple stable activity
states or normalization, which have been measured in many sensory
cortices. Synapses displaying activity dependent short-term plasticity
(STP) have been previously reported to give rise to a non-linear network
response with potentially multiple stable states for a given stimulus.
In this seminar, I will discuss our recent analytical and numerical
analysis of computational properties of balanced networks which
incorporate short-term plasticity. We demonstrate that stimuli are normalized
by the network and that increasing the stimulus to one sub-network
suppresses the activity in the neighboring population. Thereby,
normalization and suppression are linear in stimulus strength when STP
is disabled and become non-linear with activity dependent synapses.
In this talk we study the impact that urban mobility patterns have on the onset of epidemics. We focus on two particular datasets from the cities of Medellín and Bogotá, both in Colombia. Although mobility patterns in these two cities are similar to those typically found in large cities, these datasets provide additional information about the socioeconomic status of the individuals. This information is particularly important when the level of inequality in a society is large, as is the case in Colombia. Taking advantage of this additional information, we unveil the differences between the mobility patterns of the different social strata and finally reveal the social hierarchy by analyzing the contagion patterns that occur during an epidemic outbreak.
Fluctuation in small systems has attracted wide interest because of recent experimental developments in biological, colloidal, and electrical systems. As accurate data on fluctuations have become accessible, the importance of mathematically modeling their dynamics has been increasing. One of the minimal models for such systems is the Langevin equation, a simple model composed of viscous friction and white Gaussian noise. The validity of the Langevin model has been established in terms of some microscopic theories [1], and the model has been used both theoretically and experimentally to describe thermal fluctuations.
On the other hand, non-Gaussian properties of fluctuations are reported to emerge in athermal systems, such as biological, granular, and electrical systems. A natural question then arises: when and how does non-Gaussian fluctuation emerge in athermal systems? In this seminar, we present a systematic method to derive a Langevin-like equation driven by non-Gaussian noise for a wide class of stochastic athermal systems, starting from master equations and developing an asymptotic expansion [2, 3]. We find an explicit condition under which the non-Gaussian properties of the athermal noise become dominant for tracer particles coupled to both thermal and athermal environments. We also derive an inverse formula to infer microscopic properties of the athermal bath from the statistics of the tracer particle. Furthermore, we obtain the full-order asymptotic formula of the steady distribution function for arbitrarily strong non-linear friction, and show that the first-order approximation corresponds to the independent-kick model [4]. We apply our formulation to a granular motor under viscous and Coulombic friction, and analytically obtain the angular velocity distribution functions. Our theory demonstrates that the non-Gaussian Langevin equation is a minimal model of athermal systems.
[1] N.G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland (2007).
[2] K. Kanazawa, T.G. Sano, T. Sagawa, and H. Hayakawa, Phys. Rev. Lett. 114, 090601 (2015).
[3] K. Kanazawa, T.G. Sano, T. Sagawa, and H. Hayakawa, J. Stat. Phys. 160, 1294 (2015).
[4] J. Talbot, R.D. Wildman, and P. Viot, Phys. Rev. Lett. 107, 138001 (2011).
Rectification of work under non-equilibrium conditions has been one of the important topics of non-equilibrium statistical mechanics. Within the framework of equilibrium thermodynamics, it is well known that work can be rectified from two thermal equilibrium baths. We address the question of how work can be rectified from a Brownian object (a piston) attached to multiple environments, including non-equilibrium baths. We focus on the adiabatic piston problem under nonlinear friction, where a piston with sliding friction separates two gases of the same pressure but different temperatures. Without sliding friction, the direction of the piston's motion is known to be determined by the temperature difference of the two gases [1,2]. However, if sliding friction exists, we report that the direction of motion depends on the amplitude of the friction and on its nonlinearity [3]. If time allows, we also report a possible application to the problem of a fluctuating heat engine, where the temperature of the gas is changed in a cyclic manner [4].
[1] E. H. Lieb, Physica A 263 491 (1999).
[2] Ch. Gruber and J. Piasecki, Physica A 268, 412 (1999); A. Fruleux, R. Kawai and K. Sekimoto, Phys. Rev. Lett. 108, 160601 (2012).
[3] T. G. Sano and H. Hayakawa, Phys. Rev. E 89 032104 (2014).
[4] T. G. Sano and H. Hayakawa, arXiv:1412.4468 (2014).
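The effect of Coulomb (sliding) friction on velocity statistics can be illustrated with a minimal Euler-Maruyama toy simulation (my own sketch with illustrative parameters, not the model of refs. [1-4]): dry friction concentrates probability near the sticking state v = 0 and narrows the stationary distribution relative to the purely viscous case.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, delta, D, dt, n = 1.0, 1.0, 1.0, 1e-3, 200_000
noise = rng.normal(0.0, np.sqrt(2.0*D*dt), n)   # same noise for both runs

def simulate(dry_friction):
    """Euler-Maruyama for dv = (-gamma*v - delta*sign(v)) dt + sqrt(2D) dW,
    with the dry-friction term switched on or off."""
    v, out = 0.0, np.empty(n)
    for t in range(n):
        drag = -gamma*v - (delta*np.sign(v) if dry_friction else 0.0)
        v += drag*dt + noise[t]
        out[t] = v
    return out

v_dry, v_free = simulate(True), simulate(False)
# Without dry friction the stationary variance is D/gamma; with it, the
# stationary density ~ exp(-(gamma v^2/2 + delta|v|)/D) is visibly narrower.
```

In the stick-slip regime discussed in the abstract, the singular part of the density at v = 0 would additionally require resolving the sticking events, which this naive discretization only approximates.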
In this presentation, a generalisation of pairwise models to non-Markovian epidemics on networks is presented. For the case of infectious periods of fixed length, the resulting pairwise model is a system of delay differential equations, which shows excellent agreement with results based on stochastic simulations. Furthermore, we analytically compute a new R_0-like threshold quantity and derive an analytic relation between it and the final epidemic size. Additionally, we show that the pairwise model and the analytic results can be generalised to an arbitrary distribution of infectious times, using integro-differential equations, and that this leads to a general expression for the final epidemic size. By establishing a rigorous link between non-Markovian dynamics and pairwise delay differential equations, we provide the framework for a more systematic understanding of non-Markovian dynamics.
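The flavor of fixed-infectious-period dynamics can be conveyed with a hypothetical mean-field toy (far simpler than the pairwise network model of the talk): an SIR system in which every individual infected at time t recovers exactly at time t + sigma, integrated with a delayed recovery term.

```python
import numpy as np

beta, sigma_p, dt, T = 0.5, 4.0, 0.01, 60.0   # so R0 = beta*sigma_p = 2
n, delay = int(T/dt), int(sigma_p/dt)
S, I = np.empty(n), np.empty(n)
S[0], I[0] = 0.999, 0.001
new_inf = np.zeros(n)             # incidence, used to schedule recoveries
for t in range(n - 1):
    new_inf[t] = beta*S[t]*I[t]*dt
    # individuals infected exactly sigma_p ago recover now
    recovered = new_inf[t - delay] if t >= delay else 0.0
    S[t+1] = S[t] - new_inf[t]
    I[t+1] = I[t] + new_inf[t] - recovered
final_size = 1.0 - S[-1]          # close to the root of z = 1 - exp(-R0*z)
```

For R0 = 2 the final-size relation gives z close to 0.8, which the delayed Euler integration reproduces; the pairwise delay differential equations of the talk play the analogous role at the level of node and pair variables on a network.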
Systems driven out of equilibrium experience large fluctuations of the dissipated work. The same is true for wavefunction amplitudes in disordered systems close to the Anderson localization transition. In both cases, the probability distribution function is given by the large-deviation ansatz. Here we exploit the analogy between the statistics of work dissipated in a driven single-electron box and that of random multifractal wavefunction amplitudes, and uncover new relations that generalize the Jarzynski equality. We checked the new relations theoretically using the rate equations for sequential tunnelling of electrons and experimentally by measuring the dissipated work in a driven single-electron box and found a remarkable correspondence. The results represent an important universal feature of the work statistics in systems out of equilibrium and help to understand the nature of the symmetry of multifractal exponents in the theory of Anderson localization.
I will give a gentle introduction to some recent work on the effects of long-range temporal correlations in stochastic particle models, focusing particularly on fluctuations about the typical behaviour. Specifically, in the first part of the talk, I will discuss how long-range memory dependence can modify the large deviation principle describing the probability of rare currents and lead, for example, to superdiffusive behaviour. In the second part of the talk, I will describe a more interdisciplinary project incorporating the psychological "peak-end" heuristic for human memory into a simple discrete choice model from economics.
[Sun, sea and sand(pit): This is mainly work completed during my sabbatical and partially funded/inspired by the "sandpit" grant EP/J004715/1. There may be a few pictures!]
This is part of a series of collaborative meetings between Bath, Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, Surrey, and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
For speakers, schedule, titles, and abstracts see the meeting webpage.
The talk provides an overview of recent work on the analysis of von Neumann entropy, which leads to new methods for network algorithms in both the machine learning and complex network domains. We commence by presenting simple approximations for the von Neumann entropy of both directed and undirected networks in terms of edge degree statistics. In the machine learning domain, this leads to new description-length methods for learning generative models of network structure, and new ways of computing information-theoretic graph kernels. In the complex network domain, it provides a means of analysing the time evolution of networks, and of making links with the thermodynamics of network evolution.
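For concreteness, the exact quantity being approximated can be computed directly from the Laplacian spectrum (a brute-force sketch of my own; the talk's contribution is the degree-statistics approximations, not this spectral version):

```python
import numpy as np

def von_neumann_entropy(adj):
    """von Neumann entropy of an undirected graph: Shannon entropy (in bits)
    of the eigenvalues of the density matrix rho = L / trace(L), where L is
    the combinatorial Laplacian."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj
    rho_eigs = np.linalg.eigvalsh(L) / np.trace(L)
    rho_eigs = rho_eigs[rho_eigs > 1e-12]   # 0*log(0) = 0 convention
    return float(-(rho_eigs * np.log2(rho_eigs)).sum())

# Example: the triangle graph K3 has Laplacian eigenvalues 0, 3, 3,
# hence rho eigenvalues 0, 1/2, 1/2 and entropy exactly 1 bit.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

The spectral computation costs O(N^3), which is exactly why degree-based approximations matter for large networks.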
The partially directed path is a classical model in lattice path combinatorics. In this talk I will review briefly the model and explain why it is a good model for quantifying polymer entropy. If the path is confined to the space between vertical walls in a half-lattice, then it loses entropy. This loss of entropy induces an entropic force on the walls. I will show how to determine the generating and partition function of the model using the kernel method, and then compute entropic forces and pressures. In some cases the asymptotic behaviour of the entropic forces will be shown. This work was done in collaboration with Thomas Prellberg. See http://arxiv.org/abs/1509.07165
In recent years, ideas and methods from network science have been
applied to study the structure of time series, thereby building a bridge
between dynamical systems, time series analysis and graph theory. In this
talk I will focus on a particular approach, namely the family of
visibility algorithms, and will give a friendly overview of the main
results that we have obtained recently. In particular, I will focus on
several canonical problems arising in different fields such as nonlinear
dynamics, stochastic processes, statistical physics and machine learning
as well as in applied fields such as finance, and will show how these can
be mapped, via visibility algorithms, to the study of certain topological
properties of visibility graphs. If time permits, I will also present a
diagrammatic theory that allows one to find some exact results on the
properties of these graphs for general classes of Markovian dynamics.
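The natural visibility algorithm itself is compact enough to state in full (a minimal O(n^2) implementation of my own, for illustration):

```python
def visibility_graph(series):
    """Natural visibility algorithm: nodes are time points a = 0..n-1;
    a and b are linked iff every intermediate point lies strictly below
    the straight line joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < ya + (yb - ya)*(c - a)/(b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Consecutive points always see each other, so the visibility graph is connected by construction; the interesting information sits in the longer-range links, whose statistics inherit properties of the underlying dynamics.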
Quantum Hall states are characterised by the precise quantization of Hall conductance, the phenomenon whose geometric origin was understood early on. One of the main goals of the theory is computing adiabatic phases corresponding to various geometric deformations (associated with the line bundle, metric and complex structure moduli), in the limit of a large number of particles. We consider QH states on Riemann surfaces, and give a complete characterisation of the problem for the integer QH states and for the Laughlin states in the fractional QHE, by computing the generating functional for these states. In the integer QH case our method relies on the Bergman kernel expansion for high powers of a holomorphic line bundle, and the answer is expressed in terms of energy functionals in Kahler geometry. We explain the relation of geometric phases to Quillen's theory of determinant line bundles, using Bismut-Gillet-Soule anomaly formulas. On the sphere the generating functional is also related to the partition function for normal random matrix ensembles for a large class of potentials. For the Laughlin states we compute the generating functional using a path integral in a 2d scalar field theory.
The binary-state voter model describes a system of agents who adopt the opinions of their neighbours. The coevolving voter model (CVM, [1]) extends its scope by giving the agents the option to sever the link instead of adopting a contrarian opinion. The resulting simultaneous evolution of the network and the configuration leads to a fragmentation transition typical of such adaptive systems. The CVM was our starting point for investigating coevolution in the context of multilayer networks, work IFISC undertook within the scope of the LASAGNE Initiative. In this talk I will briefly review some of the outcomes and follow-up works. First we will see how coupling together two CVM networks modifies the transitions and results in a new type of fragmentation [2]. I will then identify the latter with the behaviour of the single-network CVM with select nodes constantly under the stress of noise [3]. Finally, I will relate our attempts to reproduce the effect of multiplexing on the voter model by studying the behaviour of the standard aggregates; the negative outcome gives validity to treating the multiplex as a fundamentally novel, non-reducible structure [4].
[1] F. Vazquez, M. San Miguel and V. M. Eguiluz, Generic Absorbing Transition in Coevolution Dynamics, Physical Review Letters, 100, 108702 (2008)
[2] MD, M. San Miguel and V. M. Eguiluz, Absorbing and Shattered Fragmentation Transitions in Multilayer Coevolution, Physical Review E, 89, 062818, (2014)
[3] MD, V. M. Eguiluz and M. San Miguel, Noise in Coevolving Networks, Physical Review E, 92, 032803, (2015)
[4] MD, V. Nicosia, V. Latora and M. San Miguel, Irreducibility of Multilayer Network Dynamics: the Case of the Voter Model, arXiv:1507.08940 (2015)
We study the phenomenon of migration of the low-molecular-weight component of a binary
polymer mixture to the free surface using mean field and self-consistent field theories. By proposing a free energy functional that incorporates polymer-matrix elasticity explicitly, we compute the migrant volume fraction and show that it decreases significantly as the sample rigidity is increased. Estimated values of the bulk modulus suggest that the effect should be observable experimentally for rubber-like materials. This provides a simple way of controlling surface migration in polymer mixtures and can play an important role in industrial formulations, where surface migration often leads to decreased product functionality.
Network growth models with attachment rules governed by intrinsic node fitness are considered. Both direct and inverse problems of matching the growth rules to node degree distribution and correlation functions are given analytical solutions. It is found that the node degree distribution is generically broader than the distribution of fitness, saturating at power laws. The saturation mechanism is analysed using a feedback model with dynamically updated fitness distribution. The latter is shown to possess a nontrivial fixed point with a unique power-law degree distribution. Applications of field-theoretic methods to network growth models are also discussed.
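A toy version of the direct problem can be simulated in a few lines (an illustrative sketch of my own, not the paper's formalism): grow a network in which incoming links are assigned in proportion to intrinsic node fitness, and observe that the degree distribution is much broader than the (uniform) fitness distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def grow_fitness_network(n, m=2):
    """Grow an n-node network: each new node t attaches m links to distinct
    existing nodes, chosen with probability proportional to their fitness
    (no explicit degree preference)."""
    fitness = rng.uniform(0.0, 1.0, n)
    degree = np.zeros(n, dtype=int)
    for t in range(m, n):
        p = fitness[:t] / fitness[:t].sum()
        targets = rng.choice(t, size=m, replace=False, p=p)
        degree[targets] += 1   # incoming links
        degree[t] += m         # outgoing links of the new node
    return fitness, degree
```

Old, high-fitness nodes keep accumulating links over the entire growth history, which is the mechanism by which the degree distribution becomes generically broader than the fitness distribution.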
Understanding the relation between functional anatomy and structural substrates is a major challenge in neuroscience. To study at the aggregate level the interplay between structural brain networks and functional brain networks, a new method will be described; it provides an optimal brain partition (emerging from a hierarchical clustering analysis) and maximizes the "cross-modularity" index, leading to large modularity for both networks as well as a large within-module similarity between them. The brain modules found by this approach will be compared with the classical Resting State Networks, as well as with anatomical parcellations in the Automated Anatomical Labeling atlas and with the Brodmann partition.
Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, although the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications for network science and cosmology. However, this simple framework is unable to explain the emergence of community structure, a property that, along with scale-free degree distributions and strong clustering, is commonly found in real complex networks. Here we show how latent network geometry coupled with preferential attachment of the nodes to this geometry fills this gap. We call this mechanism geometric preferential attachment (GPA) and validate it against the Internet. GPA gives rise to soft communities that provide a different perspective on the community structure in networks. The connections between GPA and cosmological models, including inflation, are also discussed.
In a good solvent, a polymer chain assumes an extended configuration. As the solvent quality (or the temperature) is lowered, the configuration changes to globular, which is more compact. This collapse transition is also called the coil-globule transition in the literature. Since the pioneering work by de Gennes, it is known that it corresponds to a tricritical point in a grand-canonical parameter space. In the most used lattice model, the chain is represented by a self-avoiding walk on a lattice and the solvent is effectively taken into account by including attractive interactions between monomers on first-neighbour sites which are not consecutive along the chain (SASAW's: self-attracting self-avoiding walks). We will review the model and show that small changes to it may lead to different phase diagrams, where the collapse transition is no longer a tricritical point. In particular, if the polymer is represented by a trail, which allows multiple visits to sites but maintains the constraint of single visits to edges, we find two distinct polymerized phases besides the non-polymerized phase, and the collapse transition becomes a bicritical point.
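The combinatorial backbone of these lattice models can be sketched by exhaustively enumerating short self-avoiding walks on the square lattice (in the SASAW model the attractive interactions would then enter as Boltzmann weights on non-consecutive nearest-neighbour pairs; this sketch only counts configurations):

```python
def count_saws(n):
    """Count n-step self-avoiding walks on the square lattice
    by depth-first enumeration from the origin."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(last, visited, remaining):
        if remaining == 0:
            return 1
        x, y = last
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:       # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

# known square-lattice SAW counts: 4, 12, 36, 100, ...
assert [count_saws(n) for n in range(1, 5)] == [4, 12, 36, 100]
```

For a trail, the visited set would track edges rather than sites, which is exactly the modification that changes the phase diagram.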
Mathematical modelling of cancer has a long history, but all cancer models can be categorized into two classes. "Non-spatial" models treat cancerous tumours as well-stirred bags of cells. This approach leads to nice, often exactly solvable models. However, real tumours are not well mixed and different subpopulations of cancer cells reside in different spatial locations in the tumour. "Spatial" models that aim at reproducing this heterogeneity are often very complicated and can only be studied through computer simulations.
In this talk I will present spatial models of cancer that are analytically soluble. These models demonstrate how the growth and genetic composition of tumours are affected by three processes: replication, death, and migration of cancer cells. I will show what predictions these models make regarding experimentally accessible quantities such as the growth rate or genetic heterogeneity of a tumour, and discuss how they compare to clinical data.
Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment. Finally, the model is able to explain clinically observed patient variability with respect to the time course of AF.
We study a standard model of stochastic resonance from the point of view of dynamical systems. We present a framework for random dynamical systems with nonautonomous deterministic forcing and we prove the existence of an attracting random periodic orbit for a class of one-dimensional systems with a time-periodic component. In the case of stochastic resonance, we use properties of the attractor to derive an indicator for the resonant regime.
Discrete Flow Mapping (DFM) was recently introduced as a mesh-based high frequency method for modelling structure-borne sound in complex structures comprised of two-dimensional shell and plate subsystems. In DFM, the transport of vibrational energy between substructures is typically described via a local interface treatment where wave theory is employed to generate reflection/transmission and mode coupling coefficients. The method has now been extended to model three-dimensional meshed structures, giving a wider range of applicability and also naturally leading to the question of how to couple the two- and three-dimensional substructures. In my talk I will present a brief overview of DFM, discuss numerical approaches and sketch ideas behind Discrete Flow Mapping in coupled two and three dimensional domains.
There is a recognized need to build tools capable of anticipating tipping points in complex systems. Most commonly this is done by describing a tipping point as a bifurcation and using the formalism of phase transitions. Here we try a different approach, applicable to high-dimensional systems. A metastable state is described as a high-dimensional tipping point; a transition, in this picture, is the escape of the system from such a configuration, triggered by a rare perturbation parallel to an unstable direction. We will illustrate our procedure by applying it to two models: the Tangled Nature Model, introduced by H. Jensen et al. to mathematically explain the macroscopic intermittent dynamics of ecological systems, a phenomenon known as punctuated equilibrium; and high-dimensional replicator systems with a stochastic element, first developed by J. Grujic. By describing the models' stochastic dynamics through a mean field approximation we are able to gather information on the stability of the metastable configuration and predict the arrival of transitions.
Assessing systemic risk in financial markets and identifying systemically important financial institutions and assets is of great importance. In this talk I will consider two channels of propagation of financial systemic risk: (i) the common exposure to similar portfolios and fire sale spillovers, and (ii) liquidity cascades in interbank networks. For each of them I will show how the use of statistical models of networks might be useful in systemic risk studies. In the first case, by applying the Maximum Entropy principle to the bipartite network of banks and assets, we propose a method to assess aggregated and single banks' systemicness and vulnerability and to statistically test for a change in these variables when only the information on the size of each bank and the capitalization of the investment assets is available. In the second case, by inferring a stochastic block model from the e-MID interbank network, we show that the extraordinary ECB intervention during the sovereign debt crisis completely changed the large-scale organization of this market, and we identify the banks that, changing their strategy in response to the intervention, contributed most to the architectural network mutation.
Biological systems operate in a far-from-equilibrium regime, and one defining feature of living organisms is their motility. In the hydrodynamic limit, a system of motile organisms may be viewed as a form of active matter, which has been shown to exhibit behaviour analogous to that found in equilibrium systems, such as phase separation in the case of motility-induced aggregation, and critical phase transitions in incompressible active fluids. In this talk, I will use the concept of universality to categorise some of the emergent behaviour observed in active matter. Specifically, I will show that i) the coarsening kinetics of motility-induced phase separation belongs to the Lifshitz-Slyozov-Wagner universality class [1]; ii) the order-disorder phase transition in incompressible polar active fluids (IPAF) constitutes a novel universality class [2]; and iii) the behaviour of IPAF in the ordered phase in 2D belongs to the Kardar-Parisi-Zhang universality class [3].
References:
[1] C. F. Lee, “Interface stability, interface fluctuations, and the Gibbs-Thomson relation in motility-induced phase separations,” arXiv: 1503.08674, 2015.
[2] L. Chen, J. Toner, and C. F. Lee, “Critical phenomenon of the order-disorder transition in incompressible active fluids,” New Journal of Physics, 17, 042002, 2015.
[3] L. Chen, C. F. Lee, and J. Toner, “Birds, magnets, soap, and sandblasting: surprising connections to incompressible polar active fluids in 2D,” arXiv:1601.01924, 2016.
We find a correspondence between certain difference algebras and subshifts of finite type (SFTs) as studied in symbolic dynamics. The known theory of SFTs from symbolic dynamics allows us to make significant advances in difference algebra. Conversely, a `Galois theory' point of view from difference algebra allows us to obtain new structure results for SFTs.
Evolutionary Game Theory (EGT) represents the attempt to describe the evolution of populations within the formal framework of Game Theory, combined with principles and ideas of the Darwinian theory of evolution.
Nowadays, a long list of EGT applications spans from biology to socio-economic systems, where the emergence of cooperation constitutes one of the topics of major interest.
Here statistical physics allows us to investigate EGT dynamics, in order to understand the relations between microscopic and macroscopic behavior in these systems.
Following this approach, this talk will present a new application of EGT: a new heuristic for solving optimization tasks, such as the Traveling Salesman Problem (TSP). The results of this work show that EGT can be a powerful framework for studying a wide range of problems.
Many physical systems can be described by particle models. The interaction between these particles is often modeled by forces, which typically depend on the inter-particle distance, e.g., gravitational attraction in celestial mechanics, Coulomb forces between charged particles, or swarming models of self-propelled particles. In most physical systems Newton's third law of actio-reactio is valid. However, when considering a larger class of interacting particle models, it might be crucial to introduce an asymmetry into the interaction terms, such that the forces not only depend on the distance, but also on the direction. Examples are found in pedestrian models, where pedestrians typically pay more attention to people in front than behind, or in traffic dynamics, where drivers on highways are assumed to adjust their speed according to the distance to the preceding car. Motivated by traffic and pedestrian models, it seems valuable to study particle systems with asymmetric interaction where Newton's third law is invalid. Here general particle models with symmetric and asymmetric repulsion are studied and investigated for finite-range and exponential interactions in straight corridors and in an annulus. In the symmetric case, transitions from one- to multi-lane (zig-zag) behavior, including multistability, are observed for varying particle density and for varying curvature at fixed density. When the asymmetry of the interaction is taken into account, a new “bubble”-like pattern arises in which the distance between lanes becomes spatially modulated and changes periodically in time, i.e. peristaltic motion emerges. We find the transition from the zig-zag state to the peristaltic state to be characterized by a Hopf bifurcation.
Granular matter is the prototypical example of systems that jam when subject to an external loading. Its athermal character, i.e. the fact that the motion of individual grains is insensitive to thermal fluctuations, makes its statistical properties a priori dependent on the protocol used to reach the jammed state. In this talk we will look at two distinct examples from different classes of such protocols: single-step protocols and sequential protocols. Depending on the context, we will see how one can try to extend the definition of concepts borrowed from statistical thermodynamics such as entropy, ensembles and ergodicity so that they remain meaningful for jammed granular matter.
Zeros of vibrational modes have been fascinating physicists for
several centuries. Mathematical study of zeros of eigenfunctions goes
back at least to Sturm, who showed that, in dimension d=1, the n-th
eigenfunction has n-1 zeros. Courant showed that in higher dimensions
only half of this is true, namely zero curves of the n-th eigenfunction of
the Laplace operator on a compact domain partition the domain into at
most n parts (which are called "nodal domains").
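Sturm's one-dimensional statement is easy to check numerically; a small sketch using the discrete Dirichlet Laplacian as a stand-in for the continuous operator:

```python
import numpy as np

# Discrete 1D Dirichlet Laplacian on N interior points; N + 1 = 53 is
# prime, so no eigenvector entry is exactly zero and sign changes are
# unambiguous.
N = 52
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order

def sign_changes(v):
    return int(np.sum(v[:-1] * v[1:] < 0))

# Sturm: the n-th eigenfunction has exactly n - 1 zeros
for m in range(1, 7):
    assert sign_changes(vecs[:, m - 1]) == m - 1
```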
It recently transpired that the difference between this upper bound
and the actual value can be interpreted as an index of instability of
a certain energy functional with respect to suitably chosen
perturbations. We will discuss two examples of this phenomenon: (1)
stability of the nodal partitions of a domain in R^d with respect to a
perturbation of the partition boundaries and (2) stability of a graph
eigenvalue with respect to a perturbation by magnetic field. In both
cases, the "nodal defect" of the eigenfunction coincides with the
Morse index of the energy functional at the corresponding critical
point. We will also discuss some applications of the above results.
Based on arXiv:1103.1423, CMP'12 (with R.Band, H.Raz, U.Smilansky),
arXiv:1107.3489, GAFA'12 (with P.Kuchment, U.Smilansky),
arXiv:1110.5373, APDE'13
arXiv:1212.4475, PTRSA'13 to appear (with T.Weyand),
arXiv:1503.07245, JMP'15 to appear (with R.Band and T.Weyand)
We all need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. In this seminar I will present two laboratory experiments which focus on the impact of information and reputation on human behavior when people engage in cooperative interactions on dynamic networks. In the first study, we investigate whether and how the ability to make or break links in social networks fosters cooperation, paying particular attention to whether information on an individual's actions is freely available to potential partners. Studying the role of information is relevant because complete knowledge of other people's actions is often not freely available. In the second work, we focus our attention on the role of individual reputation, an indispensable tool to guide decisions about social and economic interactions with individuals otherwise unknown. Usually, information about prospective counterparts is incomplete, often being limited to an average success rate. Uncertainty about reputation is further increased by fraud, which is increasingly becoming a cause of concern. To address these issues, we have designed an experiment where participants could spend money to have their observable cooperativeness increased. Our findings point to the importance of ensuring the truthfulness of reputation for a more cooperative and fair society.
Consider a continuously evolving stochastic process that gets interrupted at random times with big changes. Examples are financial crashes due to a sudden fall in stock prices, a sudden decrease in population due to a natural catastrophe, etc. Question: How do these sudden interruptions affect the observable properties at long times?
As a first answer, we consider simple diffusion interrupted at random times by long jumps associated with resets to the initial state. We will discuss recent advances in characterizing the long-time properties of such dynamics, thereby unveiling a host of rich observable properties. Time permitting, I will discuss the extension of these studies to many-body interacting systems.
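The simplest case, Brownian motion reset to the origin at Poissonian times, can be simulated in a few lines. The stationary density is known to be the Laplace law p(x) = (a/2) exp(-a|x|) with a = sqrt(r/D), hence stationary variance 2D/r; the parameter values below are illustrative, and this is a sketch rather than anyone's production code:

```python
import numpy as np

rng = np.random.default_rng(1)
D, r = 0.5, 1.0                  # diffusion constant, resetting rate
dt, T, M = 0.01, 20.0, 4000      # time step, horizon, number of walkers
x = np.zeros(M)
for _ in range(int(T / dt)):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(M)
    x[rng.random(M) < r * dt] = 0.0      # Poissonian resets to the origin

# Stationary variance is 2D/r = 1 for these parameters
assert 0.7 < x.var() < 1.3
```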
There has been emerging interest in the study of the social
networks in cultural works such as novels and films. Such character networks
exhibit many of the properties of complex networks such as skewed degree
distribution and community structure, but may be of relatively small order
with a high multiplicity of edges. We present graph extraction,
visualization, and network statistics for three novels: Stephenie Meyer's
Twilight, Stephen King's The Stand, and J.K. Rowling's Harry Potter
and the Goblet of Fire. Coupling these with 800 character networks from films
found in the Moviegalaxies database, we compare the data sets to simulations
from various stochastic complex networks models including the Chung-Lu
model, the configuration model, and the preferential attachment model. We
describe our model selection experiments using machine learning techniques
based on motif (or small subgraph) counts. The Chung-Lu model best fits
character networks and we will discuss why this is the case.
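For reference, the Chung-Lu model mentioned above joins each pair of nodes independently with probability proportional to the product of prescribed weights; a minimal sampler (an illustrative sketch with arbitrary weights, not the authors' fitting pipeline):

```python
import numpy as np

def chung_lu(w, seed=0):
    """Sample a Chung-Lu graph: nodes i, j are joined independently
    with probability min(1, w_i * w_j / sum(w))."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    P = np.minimum(1.0, np.outer(w, w) / w.sum())
    A = np.triu((rng.random(P.shape) < P).astype(int), k=1)
    return A + A.T                        # symmetric, no self-loops

w = [1, 2, 3, 4, 5] * 100                 # 500 prescribed expected degrees
A = chung_lu(w)
deg = A.sum(axis=1)
assert abs(deg.mean() - 3.0) < 0.5        # realized mean ~ prescribed mean
```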
The title of my talk was the topic of an Advanced Study Group for which I was convenor last year [1]. In my talk I will give a brief outline of our research activities. It should be understandable to a rather general audience.
A question that attracted a lot of attention in the past two decades is whether biologically relevant search strategies can be identified by statistical data analysis and mathematical modeling. A famous paradigm in this field is the Levy flight hypothesis. It states that under certain mathematical conditions Levy dynamics, which defines a key concept in the theory of anomalous stochastic processes, leads to an optimal search strategy for foraging organisms. This hypothesis is heavily debated in the current literature [2]. After briefly introducing the stochastic processes of Levy flights and Levy walks, I will review examples and counterexamples of experimental data and their analyses confirming and refuting the Levy flight hypothesis. This debate motivated our own work on deriving a fractional diffusion equation for an n-dimensional correlated Levy walk [3], studying the search reliability and search efficiency of combined Levy-Brownian motion [4], and investigating stochastic first passage and first arrival problems [5].
[1] www.mpipks-dresden.mpg.de/~asg_2015
[2] R.Klages, Search for food of birds, fish and insects, invited book chapter in: A.Bunde, J.Caro, J.Kaerger, G.Vogl (Eds.), Diffusive Spreading in Nature, Technology and Society. (Springer, Berlin, 2017).
[3] J.P.Taylor-King, R.Klages, S.Fedotov, R.A.Van Gorder, Phys.Rev.E 94, 012104 (2016).
[4] V.V.Palyulin, A.Chechkin, R.Klages, R.Metzler, J.Phys.A: Math.Theor. 49, 394002 (2016).
[5] G.Blackburn, A.V.Chechkin, V.V.Palyulin, N.W.Watkins, R.Klages, to be published.
In 1972 Robert May argued that (generic) complex systems become unstable to small displacements from equilibria as the system complexity increases. In search of a global signature of this instability transition, we consider a class of nonlinear dynamical systems whereby N degrees of freedom are coupled via a smooth homogeneous Gaussian vector field. Our analysis shows that with the increase in complexity, as measured by the number of degrees of freedom and the strength of interactions relative to the relaxation strength, such systems undergo an abrupt change from a simple set of equilibria (a single stable equilibrium for N large) to a complex set of equilibria. Typically, none of these equilibria are stable, and their number grows exponentially with N. This suggests that the loss of stability manifests itself on the global scale as an exponential explosion in the number of equilibria. [My talk is based on a joint paper with Yan Fyodorov and on an unpublished work with Gerard Ben Arous and Yan Fyodorov]
We study the spectrum of random geometric graphs using random matrix theory. We look at short-range correlations in the level spacings via the nearest-neighbour spacing distribution and long-range correlations via the spectral rigidity. These correlations in the level spacings give information about the localisation of eigenvectors, the level of community structure, and the degree of randomness within the networks. We find that the spectral statistics of random geometric graphs follow the random matrix universality found in other random graph models.
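A sketch of the basic pipeline, generating a random geometric graph and extracting its adjacency spectrum, the input to the spacing and rigidity statistics (parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, radius = 300, 0.15                     # illustrative parameters
pts = rng.random((n, 2))                  # uniform points in the unit square
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
A = ((d2 < radius ** 2) & (d2 > 0)).astype(float)   # adjacency matrix

vals = np.sort(np.linalg.eigvalsh(A))     # real spectrum (A is symmetric)
bulk = vals[n // 4: 3 * n // 4]           # central part of the spectrum
s = np.diff(bulk)
s /= s.mean()                             # spacings rescaled to unit mean

assert abs(vals.sum()) < 1e-6             # trace of A is zero
assert vals[-1] >= A.sum() / n - 1e-6     # largest eigenvalue >= mean degree
```

After proper unfolding, the distribution of spacings like `s` is what gets compared against the Wigner surmise or the Poisson law.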
I will discuss defining networks from observations of tree species: how to quantify co-associations between multiple, inhomogeneous point-process patterns, and how to identify communities, or groups, in such observations. The work is motivated by the distribution of tree and shrub species from a 50 ha forest plot on Barro Colorado Island. We show that our method can be used to construct biologically meaningful subcommunities that are linked to the spatial structure of the plant community.
This is joint work with David Murrell and Anton Flugge.
Self-propelled particles are able to extract energy from their environment to perform directed motion. Such dynamics lead to a rich phenomenology that cannot be accounted for by equilibrium physics arguments. A striking example is the possibility for repulsive particles to undergo a phase separation, as reported in both experimental and numerical realizations. For a specific model of self-propulsion, we explore how far from equilibrium the dynamics operate. We quantify the breakdown of time reversal symmetry, and we delineate a bona fide effective equilibrium regime. Our insight into this regime is based on the analysis of fluctuations and response of the particles. Finally, we discuss how the nonequilibrium properties of the dynamics can also be captured at a coarse-grained level, thus allowing a detailed examination of the spatial structure that underlies departures from equilibrium.
Life originated as single-celled organisms, and multicellularity arose multiple times across evolutionary history. Increasingly complex cellular arrangements were selected for, conferring organisms with an adaptive advantage. Uncovering the properties of these synergistic cellular configurations is central to identifying these optimized organizational principles, and to establishing structure-function relationships. We have developed methods to capture all cellular associations within plant organs using a combination of high-resolution 3D microscopy and computational image analysis. These multicellular organs are abstracted into cellular connectivity networks and analysed using a complex systems approach. This discretization of cellular organization enables the topological properties of global 3D cellular complexity in organs to be examined for the first time. We find that the organizing properties of global cellular interactions are tightly conserved both within and across species in diverse plant organs. Seemingly stochastic gene expression patterns can also be predicted based on the context of cells within organs. Finally, evidence for optimization in cellular configurations and transport processes has emerged as a result of natural selection. This provides a framework and insight to investigate the structure-function relationship at the level of cell organization within complex multicellular organs.
Recently, there has been a surge of interest in an old result discussed by Mainardi et al. [1] that relates pseudo-differential relaxation equations and semi-Markov processes. Meerschaert and Toaldo presented a rigorous theory [2] and I recently applied these ideas to semi-Markov graph dynamics [3]. In this talk, I will present several examples and argue that further work is needed to study the solutions of pseudo-differential relaxation equations and their properties.
References
[1] Mainardi, Francesco, Raberto, Marco, Gorenflo, Rudolf and Scalas, Enrico (2000) Fractional calculus and continuous-time finance II: the waiting-time distribution. Physica A Statistical Mechanics and its Applications, 287 (3-4). pp. 468-481.
[2] Meerschaert, Mark M and Toaldo, Bruno (2015) Relaxation patterns and semi-Markov dynamics arXiv:1506.02951 [math.PR].
[3] Raberto, Marco, Rapallo, Fabio and Scalas, Enrico (2011) Semi-Markov graph dynamics. PLoS ONE, 6 (8). e23370. ISSN 1932-6203. Georgiou, Nicos, Kiss, Istvan and Scalas, Enrico (2015) Solvable non Markovian dynamic network. Physical Review E, 92 (4). 042801. ISSN 1539-3755.
Time-dependency adds an extra dimension to network science computations, potentially causing a dramatic increase in both storage requirements and computation time. In the case of Katz-style centrality measures, which are based on the solution of linear algebraic systems, allowing for the arrow of time leads naturally to full matrices that keep track of all possible routes for the flow of information. Such a build-up of intermediate data can make large-scale computations infeasible. In this talk, we describe a sparsification technique that delivers accurate approximations to the full-matrix centrality rankings, while retaining the level of sparsity present in the network time-slices. With the new algorithm, as we move forward in time the storage cost remains fixed and the computational cost scales linearly, so the overall task is equivalent to solving a single Katz-style problem at each new time point.
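The full-matrix computation that the sparsification approximates can be sketched as a running product of Katz resolvents, one per time slice (illustrative only; the downweighting parameter `a` and the random slices are assumptions, not data from the talk):

```python
import numpy as np

def dynamic_katz(slices, a):
    """Running Katz-style communicability over ordered time slices:
    Q = (I - a*A_1)^(-1) (I - a*A_2)^(-1) ..., which only counts
    walks that respect the arrow of time."""
    n = slices[0].shape[0]
    Q = np.eye(n)
    for A in slices:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q

rng = np.random.default_rng(3)
slices = [(rng.random((20, 20)) < 0.1).astype(float) for _ in range(5)]
a = 0.05                                  # below 1/(spectral radius) here
Q = dynamic_katz(slices, a)
broadcast = Q.sum(axis=1)                 # row sums rank broadcasters
assert Q.shape == (20, 20)
assert (broadcast >= 1.0 - 1e-9).all()    # Q >= I entrywise when a*rho < 1
```

The build-up of fill-in in `Q` as slices accumulate is exactly the storage problem the sparsification technique addresses.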
Networks, virtually in any domain, are dynamical entities. Think for example
about social networks. New nodes join the system, others leave it, and links
describing their interactions are constantly changing. However, due to the absence
of time-resolved data and mathematical challenges, the large majority of
research in the field neglects these features in favor of static
representations. While such approximation is useful and appropriate in some
systems and processes, it fails in many others. Indeed, in the case of sexually
transmitted diseases, ideas, and meme spreading, the co-occurrence, duration
and order of contacts are crucial ingredients.
During my talk, I will present a novel mathematical framework for the modeling
of highly time-varying networks and processes evolving on their fabric. In
particular, I will focus on epidemic spreading, random walks, and social
contagion processes on temporal networks.
Biological invasion can be generically defined as the uncontrolled spread and proliferation of species (hence called alien) to areas outside of their native range, usually following their unintentional introduction by humans. A conventional view of alien species' spatial spread is that it occurs via the propagation of a travelling population front. In a realistic 2D system, such a front normally separates the invaded area behind it from the uninvaded area ahead of it. I will show that there is an alternative scenario, called “patchy invasion”, where the spread takes place via the spatial dynamics of separate patches of high population density with a very low density between them, and a continuous population front does not exist at any time. Patchy invasion has been studied theoretically in much detail using diffusion-reaction models, e.g. see Chapter 12 in [1]. However, diffusion-reaction models have many limitations; in particular, they almost completely ignore so-called long-distance dispersal (usually associated with stochastic processes known as Levy flights). Correspondingly, I will then present some recent results showing that patchy invasion can also occur when long-distance dispersal is taken into account [2]. In this case, the system is described by integro-difference equations with fat-tailed dispersal kernels. I will also show that apparently minor details of the kernel parametrization may have a relatively strong effect on the rate of species spread.
[1] Malchow H, Petrovskii SV, Venturino E (2008) Spatiotemporal Patterns in Ecology and Epidemiology: Theory, Models, Simulations. Chapman & Hall / CRC Press, 443p.
[2] Rodrigues LAD, Mistro DC, Cara ER, Petrovskaya N, Petrovskii SV (2015) Bull. Math. Biol. 77, 1583-1619.
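A minimal 1D sketch of the integro-difference framework, using logistic growth and a fat-tailed Cauchy dispersal kernel (all parameter values are illustrative assumptions, not those of [2]):

```python
import numpy as np

# n_{t+1}(x) = integral of k(x - y) f(n_t(y)) dy on a truncated 1D grid,
# with logistic growth f and a fat-tailed (Cauchy) dispersal kernel.
L, M = 200.0, 2048
x = np.linspace(-L / 2, L / 2, M, endpoint=False)
dx = x[1] - x[0]
gamma = 0.5
k = (gamma / np.pi) / (x ** 2 + gamma ** 2)      # Cauchy kernel, centred
k /= k.sum() * dx                                # normalise on the grid

def f(n, R=3.0):                                 # logistic growth map
    return R * n * (1.0 - n)

n = np.where(np.abs(x) < 2.0, 0.5, 0.0)          # initial invaded patch
for _ in range(20):
    n = dx * np.convolve(f(n), k, mode="same")

assert n.max() <= 0.76                           # bounded by R/4 = 0.75
assert n[np.abs(x) > 20].max() > 1e-4            # fat tails: long-range spread
```

Swapping the Cauchy kernel for a Gaussian suppresses the long-range colonisation, which is the qualitative difference between diffusion-reaction and fat-tailed dispersal.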
This is part of a series of collaborative meetings between Bath, Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, St Andrews, Surrey and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
For speakers, abstracts and schedule, see the meeting web page.
The cell cytoskeleton can be successfully modelled as an 'active gel'. This is a gel that is driven out of equilibrium by the consumption of biochemical energy. In particular, myosin molecular motors exert forces on actin filaments, resulting in contraction. Theoretical studies of active matter over the past two decades have shown it to have rich dynamics and behaviour. Here I will discuss finite droplets of active matter in which interactions with the boundaries play an important role. Displacement of the whole droplet is generated by flows of the contractile active gel inside. I will show how this depends on the average direction of cytoskeleton filaments and the boundary conditions at the edge of the model cell, which are set by interactions with the external environment. I will consider the shape deformation and movement of such droplets. Inspired by applications to cell movement and deformation, I will discuss the behaviour of a layer of active gel surrounding a passive solid object as a model for the cell nucleus.
The field of random matrix theory (RMT) was born out of experimental observations of the scattering amplitudes of large atomic nuclei in the late 1950s. It led Wigner, Dyson and others to develop a theory comprising three standard random matrix ensembles, termed the Gaussian Orthogonal, Unitary and Symplectic Ensembles, which predicted the distribution of such resonances in various situations. Until recently the standard consensus was that observing the third type of statistics (the GSE) required a quantum spin; however, together with S. Mueller and M. Sieber, we proposed a quantum graph that would have such statistics without the spin requirement. Recently, this quantum graph has been realised in a laboratory setting, leading to the first experimental observation of GSE statistics, some 60 years after the conception of RMT. I will present the mathematical framework behind the construction of this system and the ideas which led to its conception.
Two-dimensional lattice paths and polygons, such as self-avoiding walks and polygons and subclasses thereof, are often used as models for biological vesicles and cell membranes. When tuning the pressure acting on the wall of the vesicle or the strength of the interactions between different parts of the chain, one often observes a phase transition between a deflated or crumpled state and an inflated or globule-like state. For models including self-avoiding polygons, staircase polygons, Dyck and Schröder paths, Bernoulli meanders and bridges, the phase transition between the different regimes is (conjectured to be) characterised by two critical exponents and a one-variable scaling function involving the Airy function. John Cardy conjectured that by turning on further interactions, one should be able to generate multicritical points of higher order, described by multivariate scaling functions involving generalised Airy integrals.
Networks form the substrate of a wide variety of complex systems, ranging
from food webs and gene regulation to social networks, transportation and
the internet. Because of this, general network abstractions allow for the
characterization of these different systems under a unified mathematical
framework. However, due to the sheer size and complexity of many of these
systems, it remains an open challenge to formulate general descriptions of
their structures, and to extract such information from data. In this talk, I
will describe a principled approach to this task, based on the elaboration
of probabilistic generative models, and their statistical inference from
data. In particular, I will present a general class of generative models
that describe the multilevel modular structure of network systems, as well
as efficient algorithms to infer their parameters. I will highlight the
common pitfalls present in more heuristic methods of capturing this type of
structure, and demonstrate the efficacy of more principled methods based on
Bayesian statistics.
Topology is one of the oldest and most relevant branches of mathematics, and it has provided an expressive and affordable language which is progressively pervading many areas of mathematics, computer science and physics.
Using examples taken from work on drug-altered brain functional networks, I will illustrate the type of novel insights that algebraic topological tools are providing in the context of neuroimaging.
I will then show how the comparison of homological features of structural and functional brain networks across a large age span highlights the presence of a globally conserved topological skeleton and of a compensation mechanism modulating the localization of functional homological features. Finally, with an eye to altered cognitive control in disease and early ageing, I will introduce preliminary theoretical results on the modelling of multitasking capacities from a statistical-mechanical perspective and show that even a small overlap between tasks strongly limits overall parallel capacity, to a degree that substantially outpaces gains from increasing network size.
One reason for the success of one-particle quantum graph models is that their spectra are determined by secular equations involving finite-dimensional determinants. In general, one cannot expect this to extend to interacting many-particle models. In this talk I will introduce some specific two-particle quantum graph models with interactions that allow one to express eigenfunctions in terms of a Bethe ansatz. From this a secular equation will be determined, and eigenvalues can be calculated numerically. The talk is based on joint work with George Garforth.
Epidemic processes on temporally varying networks are complicated by the
complexity of both the network structure and the temporal dimension. It is
still under debate which factors make some temporal networks promote infection
at a population level whereas others suppress it. We develop a theory to understand
the susceptible-infected-susceptible epidemic model on arbitrary temporal
networks, where each contact is used for a finite duration. We show that, under
certain conditions, the temporality of networks lowers the epidemic threshold, so
that infections persist more easily in temporal networks than in their static
counterparts. We further show that the Lie bracket (commutator) of the adjacency
matrices at different times, or more precisely its norm, is a useful
index for assessing the impact of temporality on the epidemic threshold
value.
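To make the commutator index concrete, here is a minimal sketch (the two synthetic snapshot matrices and the choice of Frobenius norm are illustrative assumptions, not details taken from the talk):

```python
import numpy as np

def commutator_norm(A1, A2):
    """Frobenius norm of the Lie bracket [A1, A2] = A1 A2 - A2 A1."""
    return np.linalg.norm(A1 @ A2 - A2 @ A1, ord="fro")

# Two snapshots of a toy 3-node temporal network.
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)  # edge 0-1
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)  # edge 1-2

print(commutator_norm(A1, A1))  # 0.0: identical snapshots commute
print(commutator_norm(A1, A2))  # ~1.414: the order of contacts matters
```

A vanishing norm means the snapshots commute, i.e. the temporal ordering of contacts is irrelevant; a large norm signals strong temporality.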
In a seminal paper Ruelle showed that the long time asymptotic behaviour of analytic hyperbolic systems can be understood in terms of the eigenvalues, also known as Pollicott-Ruelle resonances, of the so-called Ruelle transfer operator, a compact operator acting on a suitable Banach space of holomorphic functions.
Until recently, there were no examples of Ruelle transfer operators arising from analytic hyperbolic circle or toral maps, with non-trivial spectra, that is, spectra different from {0,1}.
In this talk I will survey recent work with Wolfram Just and Julia Slipantschuk on how to construct analytic expanding circle maps or analytic Anosov diffeomorphisms on the torus with explicitly computable non-trivial Pollicott-Ruelle resonances. I will also discuss applications of these results.
Internal gravity waves play a primary role in geophysical fluids: they contribute significantly to mixing in the ocean and they redistribute energy and momentum in the middle atmosphere. Until recently, most of the studies were focused on plane-wave solutions. However, these solutions are not a satisfactory description of most geophysical manifestations of internal gravity waves, and it is now recognized that internal wave beams with a locally confined profile are ubiquitous in the geophysical context.
We will discuss the reason for their ubiquity in stratified fluids: they are solutions of the nonlinear governing equations. Moreover, in the light of recent experimental and analytical studies of these internal gravity beams, it is timely to discuss their two main mechanisms of instability: the triadic resonant instability and the streaming instability.
We start by giving a short introduction about quasiperiodically forced interval maps. To distinguish smooth and non-smooth saddle-node bifurcations by means of a topological invariant, we introduce two new notions in the low-complexity regime, namely, asymptotic separation numbers and amorphic complexity. We present recent results with respect to these two novel concepts for additive and multiplicative forcing. This is joint work with G. Fuhrmann and T. Jäger.
Graphs can encode information from datasets that have a natural representation in terms of a network (for example datasets describing collaborations or social relations among individuals), as well as from data that can be mapped into graphs due to their intrinsic correlations, such as time series or images. Characterising the structure of complex networks at the micro- and mesoscale can thus be of fundamental importance to extract relevant information from our
data. We will present some algorithms useful to characterise the structure of particular classes of networks:
i) multiplex networks, describing systems where interactions of different
nature are involved,
and ii) visibility graphs, which can be extracted from time series.
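For intuition on the second class, here is a minimal sketch of the natural visibility-graph construction (one node per time point; an edge whenever the straight line between two samples passes above every sample in between); the toy series is an invented example:

```python
def visibility_graph(series):
    """Natural visibility graph: edge (i, j) if the straight line between
    (i, series[i]) and (j, series[j]) clears all intermediate samples."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Height of the line i-j at each intermediate index k.
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

g = visibility_graph([1.0, 4.0, 1.0, 2.0])
print(sorted(g))  # [(0, 1), (1, 2), (1, 3), (2, 3)]
```

Note how the peak at index 1 blocks the line of sight between indices 0 and 2, so that pair gets no edge.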
One often aims to describe the collective behaviour of an infinite number of particles by the differential equation governing the evolution of their density. The theory of hydrodynamic limits addresses this problem. In this talk, the focus will be on linking the particles with the geometry of the macroscopic evolution. Zero-range processes will be used as a guiding example. The geometry of the associated hydrodynamic limit, a nonlinear diffusion equation, will be derived. Large deviations serve as a scale-bridging tool to describe the many-particle dynamics by partial differential equations (PDEs), revealing the geometry as well. Finally, time permitting, we will discuss the near-minimum structure, studying the fluctuations around the minimum state described by the deterministic PDE.
The nodal surplus of the $n$-th eigenfunction of a graph is defined as
the number of its zeros minus $(n-1)$. When the graph is composed of
two or more blocks separated by bridges, we propose a way to define a
"local nodal surplus" of a given block. Since the eigenfunction index
$n$ has no local meaning, the local nodal surplus has to be defined in
an indirect way via the nodal-magnetic theorem of Berkolaiko and
Weyand.
We will discuss the properties of the local nodal surplus and their
consequences. In particular, it also has a dynamical interpretation
as the number of zeros created inside the block (as opposed to those
that entered it from outside), and its symmetry properties allow us to
prove the long-standing conjecture that the nodal surplus distribution
for graphs with $\beta$ disjoint loops is binomial with parameters
$(\beta, 1/2)$. The talk is based on a work in progress with Lior Alon
and Ram Band.
For a family of rational maps, results by Lyubich, Mané-Sad-Sullivan and DeMarco provide a fairly complete understanding of dynamical stability. I will review this one-dimensional theory and present a recent generalisation to several complex variables. I will focus on the arguments that do not readily generalise to this setting, and introduce the tools and ideas that allow one to overcome these problems.
Kuramoto–Sakaguchi-type models are probably the simplest and most generic approach to investigating phase-coupled
oscillators. Particular partially synchronised solutions, so-called chimera states, have recently received a great deal of attention. Dynamical behaviour of this type will be discussed in the context of time-delay dynamics caused by a finite propagation speed of signals.
The function of many real-world systems that consist of interacting oscillatory units depends on their collective dynamics, such as synchronization. The Kuramoto model, which has been widely used to study collective dynamics in oscillator networks, assumes that the interaction between oscillators is determined by the sine of the difference between pairs of oscillator phases. We show that more general interactions between identical phase oscillators allow for a range of collective effects, from chaotic fluctuations to localized frequency-synchrony patterns.
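As a reference point, the classical sine-coupled Kuramoto dynamics mentioned above can be sketched in a few lines (a toy Euler integration of our own devising, not code from the talk):

```python
import math

def kuramoto_step(phases, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    coupling = [K / n * sum(math.sin(pj - pi) for pj in phases) for pi in phases]
    return [p + dt * (w + c) for p, w, c in zip(phases, omega, coupling)]

# Two identical oscillators are pulled toward phase synchrony.
phases = [0.0, 1.0]
for _ in range(1000):
    phases = kuramoto_step(phases, omega=[1.0, 1.0], K=1.0)
print(abs(phases[0] - phases[1]))  # shrinks toward 0
```

For identical oscillators the phase difference obeys dΔ/dt = −K sin Δ, so any small initial mismatch decays to zero; the more general coupling functions discussed in the talk break exactly this simple behaviour.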
We will present in this talk a 1-parameter family of affine interval exchange transformations (AIET) which displays various dynamical behaviours. We will see that a fruitful viewpoint to study such a family is to associate to it what we call a dilation surface, which should be thought of as the analogue of a translation surface in this setting.
The study of this example is a good motivation for several conjectures on the dynamics of AIETs that we will try to present.
The Paris conference 2015 set a path to limit climate change to "well below 2°C". To reach this goal, integrating renewable energy sources into the electrical power grid is essential, but doing so poses an enormous challenge to the existing system, demanding new conceptual approaches. In this talk, I outline some pressing challenges to the power grid,
highlighting how methods from Mathematics and Physics can potentially support the energy transition.
In particular, I present our latest research on power grid fluctuations and how they threaten robust grid operation. For our analysis, we collected frequency recordings from power grids in North America, Europe and Japan, noticing significant deviations from Gaussianity. We develop a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach. Overall, we identify energy trading as a significant contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor to reduce fluctuation risks.
This is part of a series of collaborative meetings between Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, St Andrews, Surrey and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
1:00pm - 2:00pm: Dmitry Dolgopyat (Maryland), joint with the QMUL Probability and Applications Seminar
2:30pm - 3:30pm: Dalia Terhesiu (Exeter)
4:00pm - 5:00pm: Tuomas Sahlsten (Manchester)
For more information, visit the website:
http://www.maths.qmul.ac.uk/~ob/oneday_meeting/oneday17/onedaydynamics_q...
Using inducing schemes (generalised first return maps) to obtain uniform expansion is a standard tool for (smooth) interval maps, in order to prove, among other things, the existence of invariant measures, their mixing rates and stochastic laws. In this talk I would like to present joint work with Mike Todd (St Andrews) on how this can be applied to maps on the brink of being dissipative. We discuss a family f_{λ} of Fibonacci maps for which Lebesgue-a.e. point is recurrent or transient depending on the parameter λ. The main tool is a specific induced Markov map F_{λ} with countably many branches whose lengths converge to zero. Avoiding the difficulties of distortion control by starting with a countably piecewise linear unimodal map, we can identify the transition from conservative to dissipative exactly, and also describe in great detail the impact of this transition on the thermodynamic formalism of the system (existence and uniqueness of equilibrium states, (non)analyticity of the pressure function and phase transitions).
Characterizing how we explore abstract spaces is key to understanding our (ir)rational behaviour and decision making. While some light has been shed on the navigation of semantic networks, little is known about the mental exploration of metric spaces, such as the one-dimensional line of numbers, prices, etc. Here we address this issue by investigating the behaviour of users exploring the “bid space” in online auctions. We find that they systematically perform Lévy flights, i.e., random walks whose step lengths follow a power-law distribution. Interestingly, this is the best strategy that can be adopted by a random searcher looking for a target in an unknown environment, and it has been observed in the foraging patterns of many species. In the case of online auctions, we measure the power-law scaling over several decades, providing the neatest observation of Lévy flights reported so far. We also show that the histogram describing single-individual exponents is well peaked, pointing to the existence of an almost universal behaviour. Furthermore, a simple model reveals that the observed exponents are nearly optimal and represent a Nash equilibrium. We rationalize these findings through a simple evolutionary process, showing that the observed behaviour is robust against invasion by alternative strategies. Our results show that humans share with other animals universal patterns in general searching processes, and they raise fundamental issues in the cognitive, behavioural and evolutionary sciences.
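For intuition, power-law distributed step lengths of the kind described can be generated by inverse-transform sampling from a Pareto density P(x) ∝ x^(−μ); the parameter values below are illustrative choices, not the exponents measured in the auction data:

```python
import random

def levy_steps(n, mu=2.0, x_min=1.0, seed=42):
    """Sample n step lengths from P(x) ~ x^(-mu) for x >= x_min,
    via inverse-transform sampling of the Pareto CDF."""
    rng = random.Random(seed)
    # If u ~ Uniform(0,1], then x_min * u^(-1/(mu-1)) is Pareto-distributed.
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

steps = levy_steps(10000)
print(min(steps), max(steps))  # heavy tail: steps span several decades
```

Unlike Gaussian steps, a handful of rare but enormous jumps dominate the walk, which is the signature behaviour observed in the bid-space trajectories.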
A polynomial-like mapping is a proper holomorphic map f : U′ → U, where U′, U ≈ D, and U′ ⊂⊂ U. This definition captures the behaviour of a polynomial in a neighbourhood of its filled Julia set. A polynomial-like map of degree d is determined up to holomorphic conjugacy by its internal and external classes, that is, the (conjugacy classes of) the restrictions to the filled Julia set and its complement. In particular the external class is a degree d real-analytic orientation preserving and strictly expanding self-covering of the unit circle: the expansivity of such a circle map implies that all the periodic points are repelling, and in particular not parabolic.
We extended the polynomial-like theory to a class of parabolic mappings which we call parabolic-like mappings. In this talk we present the parabolic-like mapping theory and its uses in the family of degree 2 holomorphic correspondences in which matings between the quadratic family and the modular group lie.
The study of complex human systems has become more important than ever, as the risks facing human societies from human and social factors are clearly increasing. However, disciplines such as psychology and sociology haven't made any significant scientific progress, and they are immersed in theoretical approaches and empirical methodologies developed more than 100 years ago. In this talk, I would like to point to the promise of applying ideas from complex systems and developing new computational tools for big-data reservoirs in order to address the above-mentioned challenge. I will provide several case studies illustrating the benefits of the proposed approach and several open challenges that need to be addressed.
References (for illustration)
1. Neuman, Y. (2014). Introduction to Computational Cultural Psychology. Cambridge University Press.
2. Neuman, Y., & Cohen, Y. (2014). A vectorial semantics approach to personality assessment. Scientific Reports, 4.
3. Neuman, Y., Assaf, D., Cohen, Y., & Knoll, J. L. (2015). Profiling school shooters: automatic text-based analysis. Frontiers in Psychiatry,
Let f be a smooth volume preserving diffeomorphism of a compact manifold and φ a known smooth function of zero integral with respect to the volume. The linear cohomological equation over f is
ψ ∘ f − ψ = φ
where the solution ψ is required to be smooth.
Diffeomorphisms f for which a smooth solution ψ exists for every such smooth function φ are called Cohomologically Rigid. Herman and Katok have conjectured that the only such examples up to conjugation are Diophantine rotations in tori.
We study the relation between the solvability of this equation and the fast approximation method of Anosov-Katok and prove that fast approximation cannot construct counter-examples to the conjecture.
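For orientation, in the model case of a circle rotation the cohomological equation is solved mode by mode, which is exactly where the Diophantine condition enters (a standard computation, sketched here; it is background, not a result of the talk):

```latex
% Circle rotation f(x) = x + \alpha: expand both sides in Fourier series,
% \psi(x) = \sum_k \psi_k e^{2\pi i k x}, \quad \varphi(x) = \sum_k \varphi_k e^{2\pi i k x}.
% Then \psi \circ f - \psi = \varphi gives, for each k \neq 0,
\psi_k \left( e^{2\pi i k \alpha} - 1 \right) = \varphi_k
\quad\Longrightarrow\quad
\psi_k = \frac{\varphi_k}{e^{2\pi i k \alpha} - 1}.
% For Diophantine \alpha the small divisors |e^{2\pi i k \alpha} - 1| are only
% polynomially small in k, so the rapid decay of \varphi_k (smoothness of \varphi)
% wins and \psi is smooth.
```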
The motion of a tracer particle in a complex medium typically exhibits anomalous diffusive patterns, characterised, e.g., by a non-linear mean-squared displacement and/or non-Gaussian statistics. Modelling such fluctuating dynamics is in general a challenging task, which nevertheless provides a fundamental tool to probe the rheological properties of the environment. A prominent example is the dynamics of a tracer in a suspension of swimming microorganisms, like bacteria, which is driven by the hydrodynamic fields generated by the active swimmers. For dilute systems, several experiments confirmed the existence of non-Gaussian fat tails in the displacement distribution of the probe particle, which has recently been shown to fit well a truncated Lévy distribution. This result was obtained by applying an argument first proposed by Holtsmark in the context of gravitation: the force acting on the tracer is the superposition of the hydrodynamic fields of randomly distributed swimmers. This theory, however, does not clarify the stochastic dynamics of the tracer, nor does it predict the non-monotonic behaviour of the non-Gaussian parameter of the displacement distribution. Here we derive the Langevin description of the stochastic motion of the tracer from the microscopic dynamics using tools from kinetic theory. The random driving force in the equation of motion is a coloured Lévy Poisson process, which induces power-law distributed position displacements. This theory predicts a novel transition of their characteristic exponents at different timescales. For short timescales, the Holtsmark-type scaling exponent is recovered; for intermediate ones, it is larger. Consistently with previous works, for even longer timescales the truncation appears and the distribution converges to a Gaussian. Our approach allows us to employ well-established functional methods to characterize the displacement statistics and correlations of the tracer.
In particular, it qualitatively reproduces the non-monotonic behaviour of the non-Gaussian parameter measured in recent experiments.
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has highlighted the gap between success and intrinsic quality. As a result, high-quality content that receives low attention remains invisible, relegated to the long tail of the popularity distribution. Moreover, the production and consumption of content is influenced by the underlying social network connecting users by means of friendship or follower-followee relations. This talk will present a large-scale study on the complex intertwinement between quality, popularity and social ties in an online photo-sharing platform, proposing a methodology to democratize exposure and foster long-term user engagement.
Is there a fundamental minimum to the thermodynamic cost of precision in non-equilibrium processes? Here, we investigate this question, which has recently triggered notable research efforts [1,2], for ballistic transport in a multi-terminal geometry. For classical systems, we derive a universal trade-off relation between total dissipation and the precision at which particles are extracted from individual reservoirs [3]. Remarkably, this bound becomes significantly weaker in the presence of a magnetic field breaking time-reversal symmetry. By working out an explicit model for chiral transport enforced by a strong magnetic field, we show that our bounds are tight. Beyond the classical regime, we find that, in quantum systems far from equilibrium, the correlated exchange of particles makes it possible to exponentially reduce the thermodynamic cost of precision [3]. Uniting aspects of statistical and mesoscopic physics, our work paves the way for the design of precise and efficient transport devices.
[1] A. C. Barato, U. Seifert; Phys. Rev. Lett. 114, 158101 (2015).
[2] T. R. Gingrich, J. M. Horowitz, N. Perunov, J. L. England; Phys. Rev. Lett. 116, 120601 (2016).
[3] K. Brandner, T. Hanazato, K. Saito; arXiv:1710.04928 (2017).
The topology of any complex system is key to understanding its structure and function. Fundamentally, algebraic topology guarantees that any system represented by a network can be understood through its closed paths. The length of each path provides a notion of scale, which is vitally important in characterizing dominant modes of system behavior. Here, by combining topology with scale, we prove the existence of universal features which reveal the dominant scales of any network. We use these features to compare several canonical network types in the context of a social media discussion which evolves through the sharing of rumors, leaks and other news. Our analysis enables for the first time a universal understanding of the balance between loops and tree-like structure across network scales, and an assessment of how this balance interacts with the spreading of information online. Crucially, our results allow networks to be quantified and compared in a purely model-free way that is theoretically sound, fully automated, and inherently scalable. This work is joint with Pierre-Andre Maugis and Patrick Wolfe.
This is part of a series of collaborative meetings between Bristol, Exeter, Leicester, Loughborough, Manchester, Queen Mary, St Andrews, Surrey and Warwick, funded by a Scheme 3 grant from the London Mathematical Society.
1:00pm - 2:00pm: Dmitry Dolgopyat (Maryland), joint with the QMUL Probability and Applications Seminar
Local Limit Theorem for Nonstationary Markov chains
2:30pm - 3:30pm: Dalia Terhesiu (Exeter)
The Pressure Function for Infinite Equilibrium Measures
4:00pm - 5:00pm: Sebastian van Strien (Imperial College)
Heterogeneously Coupled Maps. Coherent behaviour and reconstructing network from data
For more information, visit the website:
http://www.maths.qmul.ac.uk/~ob/oneday_meeting/oneday17/onedaydynamics_q...
The modern world is best described as interlinked networks of individuals, computing devices and social networks, where information and opinions propagate along their edges in a probabilistic or deterministic manner via interactions between individual constituents. These interactions can take the form of political discussions between friends, gossip about movies, or the transmission of computer viruses. Winners are those who maximise the impact of scarce resources, such as political activists or advertisements, by applying them to the most influential available nodes at the right time. We developed an analytical framework, motivated by and based on statistical-physics tools, for impact maximisation in probabilistic information propagation on networks, in order to better understand the optimisation process macroscopically, its limitations and potential, and to devise computationally efficient methods to maximise impact (an objective function) in specific instances.
The research questions we have addressed relate to the manner in which one could maximise the impact of information propagation by providing inputs at the right time to the most effective nodes in the particular network examined, where the impact is observed at some later time. Our approach is based on a statistical-physics-inspired analysis, Dynamical Message Passing, which calculates the probability of propagation to a node at a given time, combined with a variational optimisation process. We address the following questions: 1) Given a graph, a budget and a propagation/infection process, which nodes are best to infect to maximise the spreading? 2) How to maximise the impact on a subset of particular nodes at given times, by accessing a limited number of given nodes? 3) How to identify the most appropriate vaccination targets to isolate a spreading disease through containment of the epidemic? 4) How to optimally deploy resources in the presence of competitive/collaborative processes? We also point to potential applications.
Lokhov A.Y. and Saad D., Optimal Deployment of Resources for Maximizing Impact in Spreading Processes, PNAS 114 (39), E8138 (2017)
Parkinson’s disease is a neurodegenerative condition characterised by loss
of neurons producing dopamine in the brain. It affects 7 million people
worldwide, making it the second most common neurodegenerative disease, and
it currently has no cure. The difficulty of developing treatments and
therapies lies in the limited understanding of the mechanisms that induce
neurodegeneration in the disease. Experimental evidence suggests that the
aggregation of alpha-synuclein monomers into toxic oligomeric forms can be the
cause of dopaminergic cell death and that their detection in cerebrospinal
fluid could be a potential biomarker for the disease. In addition, the study
of these alpha-synuclein aggregates and their aggregation pathways could
potentially lead to early diagnosis of the disease. However, the small size
of alpha-synuclein monomers and the heterogeneity of the oligomers make
their detection with conventional bulk approaches extremely challenging,
often requiring sample concentrations orders of magnitude higher than
clinically relevant. Nanopore sensing techniques offer a powerful platform
to perform such analysis, thanks to their ability to read the information of
a single molecule at a time while requiring very low sample volume (µl).
This project presents a novel nanopore configuration capable of addressing
these limitations: two nanopores separated by a 20 nm gap, joined together by
a zeptolitre nanobridge. The confinement slows molecules translocating
through the nanobridge by up to two orders of magnitude compared to standard
nanopore configurations, improving significantly the limits of detection.
Furthermore, this new nanopore setting is size adaptable, and can be used to
detect a variety of analytes.
Biological systems including cancer are composed of interactions among individuals carrying different traits. I build stochastic models to capture those interactions and analyse the diversity patterns arising at the population level from these individual interactions. I would like to use this seminar to introduce the different topics I work on in mathematical biology, including evolutionary game-theoretical models of random mutations (errors introduced during individual reproduction), the application of random mutation models to predator-prey systems, as well as the evolution of resistance in ovarian cancer.
Cancers have been shown to be genetically diverse populations of cells. This diversity can affect treatment and the prognosis of patients. Furthermore, the composition of the population may change over time; it is therefore instructive to think of cancers as diverse, dynamic populations of cells subject to the rules of evolution. Population genetics, a quantitative description of the rules of evolution in terms of mutation, selection (the outgrowth of fitter sub-populations) and drift (stochastic effects), can be adapted and applied to the study of cancer as an evolutionary system. Using this mathematical description together with genomic sequencing data and Bayesian inference, we measure evolutionary dynamics in human cancers on a patient-by-patient basis from data from single time points. This allows us to infer interesting properties that govern the evolution of cancers, including the mutation rate and the fitness advantage of sub-populations, and to distinguish diversity generated by neutral (stochastic) processes from diversity due to natural selection (the outgrowth of fitter subpopulations).
In this talk I will present a new modeling framework to describe co-existing physical and socio-economic components in interconnected smart-grids. The modeling paradigm builds on the theory of evolutionary game dynamics and bio-inspired collective de
In this seminar, we will motivate and introduce the concept of network communicability. We will give a few examples of applications of this concept to biological, social, infrastructural and engineering networked systems. Building on this concept, we will show how a Euclidean geometry emerges naturally from the communicability patterns in networked complex systems. This communicability geometry characterises the spatial efficiency of networks. We will show how the communicability function allows a natural characterization of network stability and robustness to external perturbations of the system. We will also show how the communicability shortest paths define the routes of highest congestion in cities at rush hour. Finally, we will show that theoretical parameters derived from the communicability function determine the robustness of dynamical processes taking place on the networks, such as diffusion and synchronization.
References:
Estrada, E., Hatano, N. SIAM Review 58, 2016, 692-715 (Research Spotlight).
Estrada, E., Hatano, N., Benzi, M. Physics Reports, 514, 2012, 89-119.
Estrada, E., Higham, D.J. SIAM Review, 52, 2010, 696-714.
We all need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. In this seminar I will present some empirical evidence from human experiments carried out in a controlled laboratory setting which focus on the impact of reputation in dynamic networked interactions. People are engaged in playing pairwise repeated Prisoner's Dilemma games with their neighbours, or partners, and they are paid with real money according to their performance during the experiment. We will see whether and how the ability to make or break links in social networks fosters cooperation, paying particular attention to whether information on an individual's actions is freely available to potential partners. Studying the role of information is relevant as complete knowledge of other people's actions is often not available for free. We will also focus on the role of individual reputation, an indispensable tool to guide decisions about social and economic interactions with individuals otherwise unknown, and on the way this reputation is obtained in a hierarchical structure. We will show how the presence of reputation can be fundamental for achieving higher levels of cooperation in human societies. These findings point to the importance of ensuring the truthfulness of reputation for a more cooperative and fair society.
Reproduction is a defining feature of living systems. A fascinating wealth of reproductive modes is observed in nature, from unicellularity to the concerted fragmentation of multi-cellular units. However, the understanding of the factors driving the evolution of these life cycles is still limited. Here, I present a model in which groups arise from the division of single cells that do not separate but stay together until the moment of group fragmentation. The model allows for all possible fragmentation modes and calculates the population growth rate of each associated life cycle. This study focuses on fragmentation modes that maximise the growth rate, since these are promoted by natural selection. Knowing which life cycles emerge, and under which conditions, gives us insights into the early stages of the evolution of life on Earth.
This will be a joint seminar of Complex Systems with the Institute of Applied Data Sciences.
Topology, one of the oldest branches of mathematics, provides an expressive and affordable language which is progressively pervading many areas of biology, computer science and physics.
In this context, topological data analysis (TDA) tools have proven able to provide insights into high-dimensional, noisy and non-linear datasets coming from very different subjects.
Here I will introduce two TDA tools, persistent homology and Mapper, and illustrate what novel insights they are yielding, with particular attention to the study of the functional, structural and genetic connectomes.
I will show how topological observables capture and distinguish variations in the mesoscopic functional organization in two case studies: i) between drug-induced altered brain states, and ii) between perceptual states and the corresponding mental images.
Moving to the structural level, I will compare the homological features of structural and functional brain networks across a large age span and highlight the presence of dynamically coordinated compensation mechanisms, suggesting that functional topology is conserved over the depleting structural substrate.
Finally, using brain gene expression data, I will briefly describe recent work on the construction of a topological genetic skeleton highlighting differences in structure and function of different genetic pathways within the brain.
One of the key aims in network science is to extract information from the structure of networks. In this talk, I will report on recent work which uses the cycles (closed walks) of a network to probe the structure and provide useful information about what is going on in a particular dataset. I explore methods to count different types of cycles efficiently, and how they relate to a more general algebraic theory of cycles in a network. I will also show how counting simple cycles allows us to evaluate concepts like social balance in a network. I will then explore the concept of centrality more closely and show how it is related to the cycle structure of a network. I will present a new centrality measure for extended parts of a network (i.e. beyond simple vertices) derived from cycle theory, and show how it can be applied to real problems.
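As a minimal illustration of the link between closed walks and the adjacency matrix (a textbook fact, not the talk's new results): the number of closed walks of length k in a graph equals the trace of the k-th power of its adjacency matrix.

```python
# Count closed walks of length k as tr(A^k), using plain lists.
# Illustrative sketch only: counting *simple* cycles, as in the talk,
# requires removing backtracking walks, which this does not do.

def mat_mult(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def closed_walks(adj, k):
    """Number of closed walks of length k = trace of adj^k."""
    power = adj
    for _ in range(k - 1):
        power = mat_mult(power, adj)
    return sum(power[i][i] for i in range(len(adj)))

# Triangle graph K3: each vertex starts two oriented 3-cycles.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(closed_walks(triangle, 3))  # 6
```

For k = 2 the same trace recovers the sum of degrees, which is why separating genuine simple cycles from degenerate walks is the nontrivial part of the problem.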
Kinetic theory is a landmark of statistical physics and can reveal physical Brownian motion from first principles. In this framework, the Boltzmann and Langevin equations are systematically derived from the Newtonian dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy [1,2]. In light of this success, it is natural to apply this methodology beyond physics to social science, such as finance. In this presentation, we apply kinetic theory to financial Brownian motion [3,4], with empirical support from a detailed high-frequency data analysis of a foreign exchange (FX) market.
We first show our data analysis to identify the microscopic dynamics of high-frequency traders (HFTs). By tracking trajectories of all traders individually, we characterize the dynamics of HFTs from the viewpoint of trend following. We then introduce a microscopic model of FX traders incorporating the trend-following law. We apply the mathematical formulation of kinetic theory to the microscopic model for coarse-graining; Boltzmann-like and Langevin-like equations are derived via a generalized BBGKY hierarchy. We perturbatively solve these equations to show the consistency between our microscopic model and real data. Our work highlights the potential power of statistical physics in understanding financial market dynamics from their microscopic origins.
References
[1] S. Chapman, T. G. Cowling, The Mathematical Theory of Non-Uniform Gases, (Cambridge University Press, Cambridge, England, 1970).
[2] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 3rd ed. (Elsevier, Amsterdam, 2007).
[3] K. Kanazawa, T. Sueshige, H. Takayasu, M. Takayasu, Phys. Rev. Lett. 120, 138301 (2018).
[4] K. Kanazawa, T. Sueshige, H. Takayasu, M. Takayasu, Phys. Rev. E (in press, arXiv:1802.05993).
Here I present my ongoing work on estimating mutation rates per cell division by combining stochastic processes, Bayesian methods and genomic sequencing data.
Human cancers usually contain hundreds of billions of cells at diagnosis. During tumour growth these cells accumulate thousands of mutations, errors in the DNA, making each tumour cell unique. This heterogeneity is a major source of evolution within single tumours, subsequent progression and possible treatment resistance. Recent technological advances such as increasingly cheaper genome sequencing allow measuring some of this heterogeneity. However, the theoretical understanding and interpretation of the available data remain mostly unclear. For example, the most basic evolutionary properties of human tumours, such as mutation and cell survival rates or tumour ages, are mostly unknown. Here I will present some mathematical modelling of the underlying stochastic processes. In more detail, I will construct the distribution of mutational distances in a tumour that can be measured from multi-region sequencing. I show that these distributions can be understood as random sums of independent random variables. In combination with appropriate sequencing data and Bayesian inference based on our theoretical results, some of the evolutionary parameters can be recovered for tumours of single patients.
The mean-median map [4, 2, 1, 3] was originally introduced as a map over the space of finite multisets of real numbers. It extends such a multiset by adjoining to it a new number uniquely determined by the stipulation that the mean of the extended multiset be equal to the median of the original multiset. An open conjecture states that the new numbers produced by iterating this map form a sequence which stabilises, i.e., reaches a finite limit in finitely many iterations. We study the mean-median map as a dynamical system on the space of finite multisets of univariate piecewise-affine continuous functions with rational coefficients. We determine the structure of the limit function in the neighbourhood of a distinctive family of rational points. Moreover, we construct a reduced version of the map which simplifies the dynamics in such neighbourhoods and allows us to extend the results of [1] by over an order of magnitude.
References
[1] F. Cellarosi, S. Munday, On two conjectures for M&m sequences, J. Difference Equations and Applications 2 (2017), 428-440.
[2] M. Chamberland, M. Martelli, The mean-median map, J. Difference Equations and Applications 13 (2007), 577-583.
[3] J. Hoseana, The mean-median map, MSc thesis, Queen Mary, University of London, 2015.
[4] H. Shultz, R. Shiflett, M&m sequences, The College Mathematics Journal 36 (2005), 191-198.
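For a numerical feel of the map (on multisets of reals, not the piecewise-affine functional setting of the talk): the adjoined number x must satisfy (sum(S) + x)/(n + 1) = median(S), i.e. x = (n + 1) median(S) - sum(S). A minimal sketch:

```python
import statistics

def mean_median_step(s):
    """Return the number x whose adjunction makes the mean of s + [x]
    equal to the median of s: x = (n + 1) * median(s) - sum(s)."""
    return (len(s) + 1) * statistics.median(s) - sum(s)

def iterate(s, steps):
    """Iterate the mean-median map, returning the adjoined numbers.
    The open conjecture is that this sequence eventually stabilises."""
    s = list(s)
    out = []
    for _ in range(steps):
        x = mean_median_step(s)
        s.append(x)
        out.append(x)
    return out

print(iterate([0, 1, 4], 3))  # [-1, -1.5, -2.5]
```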
In this talk, we will present our latest results on the modelling of rumour and disease spreading in single and multilayer networks. We will introduce a general epidemic model that encompasses the rumour and disease dynamics into a single framework. The susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models will be discussed in multilayer networks. Moreover, we will introduce a model of epidemic spreading with awareness, where the disease and information are propagated in different layers with different time scales. We will show that the time scale determines whether the information awareness is beneficial or not to the disease spreading. Finally, we will show how machine learning can be used to understand the structure and dynamics of complex networks.
Much of the progress that has been made in the field of complex networks is
attributed to adopting dynamical processes as the means for studying these
networks, as well as their structure and response to external factors. In this
talk, by taking a different lens, I view complex networks as combinatorial
structures and show that this — somewhat alternative — approach brings new
opportunities for exploration. Namely, the focus is on the sparse regime of
the configuration model, which is the maximum entropy network constrained by an
arbitrary degree distribution, and on the generalisations of this model to the
cases of directed and coloured edges (also known as the configuration multiplex
model). We study how the (multivariate) degree distribution in these networks
defines global emergent properties, such as the sizes and structure of connected
components. By applying Joyal's theory of combinatorial species, the questions
of connectivity and structure are formalised in terms of formal power series,
and an unexpected link is made to stochastic processes. Then, by studying the
limiting behaviour of these processes, we derive an asymptotic theory that is
rich in analytical expressions for various generalisations of the configuration
model. Furthermore, interesting connections are made between the configuration
model and physical processes of different nature.
Various interacting lattice path models of polymer collapse in two dimensions demonstrate different critical behaviours, and this difference has been without a clear explanation. The collapse transition has been variously seen to be in the Duplantier-Saleur θ-point universality class (specific heat cusp), the interacting trail class (specific heat divergence), or even first-order. This talk will describe new studies that elucidate the role of three body interactions in the phase diagram of polymer collapse in two dimensions.
We present a study of a delay differential equation (DDE) model for the glacial cycles of the Pleistocene climate. The model is derived from the Saltzman and Maasch 1988 model, which is an ODE system containing a chain of first-order reactions. Feedback chains of this type limit to a discrete delay for long chains. We approximate the chain by a delay, resulting in a scalar DDE for ice mass with fewer parameters than the original ODE model. Through bifurcation analysis under variation of the delay, we discover a previously unexplored bistable region and consider solutions in this parameter region when subjected to periodic and astronomical forcing. The astronomical forcing is highly quasiperiodic, containing many overlapping frequencies from variations in the Earth's orbit. We find that under the astronomical forcing, the model exhibits a transition in time that resembles what is seen in paleoclimate records, known as the Mid-Pleistocene Transition. This transition is a distinct feature of the quasiperiodic forcing, as confirmed by the change in sign of the leading finite-time Lyapunov exponent. We draw connections between this transition and non-smooth saddle-node bifurcations of quasiperiodically forced 1D maps.
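The scalar DDE itself is not given in the abstract, so as a generic numerical sketch (my illustration, not the Saltzman-Maasch-derived model): a forward-Euler integrator with a history buffer handles equations of the form x'(t) = f(x(t), x(t - tau)), tried here on the linear test equation x' = -x(t - tau), which is asymptotically stable for tau < pi/2.

```python
import math

def euler_dde(f, tau, history, t_end, dt=0.01):
    """Forward-Euler integration of x'(t) = f(x(t), x(t - tau)).
    `history` is the (constant) value of x on the interval [-tau, 0]."""
    lag = int(round(tau / dt))          # delay measured in time steps
    buf = [history] * (lag + 1)         # ring buffer of past values
    x = history
    for n in range(int(round(t_end / dt))):
        x_delayed = buf[n % (lag + 1)]  # value at t - tau
        x = x + dt * f(x, x_delayed)
        buf[n % (lag + 1)] = x          # overwrite the slot just consumed
    return x

# Linear test DDE x' = -x(t - tau): decays (with oscillation) for tau < pi/2.
x_final = euler_dde(lambda x, xd: -xd, tau=0.5, history=1.0, t_end=20.0)
print(abs(x_final) < 0.1)
```

In the tau -> 0 limit the same routine recovers the ODE x' = -x, which gives a quick consistency check of the buffer logic.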
For fluctuating thermodynamic currents in non-equilibrium steady states, the thermodynamic uncertainty relation expresses a fundamental trade-off between precision, i.e. small fluctuations, and dissipation. Using large deviation theory, we show that this relation follows from a universal bound on current fluctuations that is valid beyond the Gaussian regime and in which only the total rate of entropy production enters. Variants and refinements of this bound hold for fluctuations on finite time scales and for Markovian networks with known topology and cycle affinities. Applied to molecular motors and heat engines, the bound on current fluctuations imposes constraints on the efficiency and power. For cyclically driven systems, a generalisation of the uncertainty relation leads to an effective rate of entropy production that can be larger than the actual one, allowing for a higher precision of currents.
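For the simplest instance, a biased random walk with hopping rates k+ > k- (my illustration with k_B = 1, not the talk's general proof): the mean current is <J> = (k+ - k-)t, its variance Var(J) = (k+ + k-)t, and the entropy production Sigma = (k+ - k-) ln(k+/k-) t. The uncertainty relation Var(J)/<J>^2 >= 2/Sigma then reduces to the elementary inequality (k+ + k-) ln(k+/k-) >= 2(k+ - k-), which can be checked directly:

```python
import math, random

def tur_gap(k_plus, k_minus):
    """Slack in the thermodynamic uncertainty relation for a biased
    random walk: (k+ + k-) ln(k+/k-) - 2 (k+ - k-).  Non-negative for
    k+ >= k- > 0, with equality only as k+ -> k-."""
    return (k_plus + k_minus) * math.log(k_plus / k_minus) \
        - 2.0 * (k_plus - k_minus)

rng = random.Random(0)
for _ in range(10000):
    kp = rng.uniform(0.01, 10.0)
    km = rng.uniform(0.01, 10.0)
    # allow tiny float error for nearly equal rates
    assert tur_gap(max(kp, km), min(kp, km)) >= -1e-12
print("TUR bound holds for all sampled rates")
```

The bound saturates in the near-equilibrium limit k+ -> k-, consistent with the Gaussian regime mentioned above.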
Modelling the dynamics of finite populations involves intrinsic demographic noise. This is particularly important when the population is small, as is frequently the case in biological applications; examples of this are gene circuits. At the same time populations can be subject to switching or changing environments; for example, promoters may bind or unbind, or bacteria can be exposed to changing concentrations of antibiotics. How does one integrate intrinsic and extrinsic noise into models of population dynamics, and how does one derive coarse-grained descriptions? How can simulations be performed efficiently? In this talk I will address some of these questions. Theoretical aspects include systematic expansions in the strength of each type of noise to derive reduced models such as stochastic differential equations, or piecewise deterministic Markov processes. I will show how this can lead to peculiar features including master equations with negative “rates”. I will also discuss a number of applications, in particular in game theory and phenotype switching.
Many biological problems, such as tumor-induced angiogenesis (the
growth of blood vessels to provide nutrients to a tumor), or signaling
pathways involved in the dysfunction of cancer (sets of molecules that
interact to turn genes on/off and ultimately determine whether a
cell lives or dies), can be modeled using differential equations.
There are many challenges with analyzing these types of mathematical
models. For example, rate constants, often referred to as parameter
values, are difficult to measure or estimate from available data.
I will present mathematical methods we have developed to enable us to
compare mathematical models with experimental data. Depending on the
type of data available, and the type of model constructed, we have
combined techniques from computational algebraic geometry and
topology, with statistics, networks and optimization to compare and
classify models without necessarily estimating parameters.
Specifically, I will introduce our methods that use computational
algebraic geometry (e.g., Gröbner bases) and computational algebraic
topology (e.g., persistent homology). I will present applications of
our methodology on datasets involving cancer. Time permitting, I will
conclude with our current work for analyzing spatio-temporal datasets
with multiple parameters using computational algebraic topology.
In this talk, we will present our ongoing activities in learning better models for inverse problems in imaging. We consider classical variational models used for inverse problems but generalise these models by introducing a large number of free model parameters. We learn the free model parameters by minimising a loss function comparing the reconstructed images obtained from the variational models with ground truth solutions from a training database. We will also show recent results on learning "deeper" regularisers that are allowed to change their parameters in each iteration of the algorithm. We show applications to different inverse problems in imaging where we put a particular focus on joint image demosaicing and denoising.
For every random process, all measurable quantities are described comprehensively through their probability distributions. In the ideal but rare case they can be obtained analytically, i.e., completely. Most physical models are not accessible analytically, so one has to perform numerical simulations. Usually this means one does many independent runs and estimates the probability distributions from the measured histograms. Since the number of repetitions is limited, maybe 10 million, the distributions can correspondingly be estimated in a range down to probabilities like 10^-10. But what if one wants to obtain the full distribution, in the spirit of obtaining all information? This means one desires to get the distribution down to the rare events, but without waiting forever by performing an almost infinite number of simulation runs.
Here, we study rare events numerically using a very general black-box method. It is based on sampling vectors of random numbers within an artificial finite-temperature (Boltzmann) ensemble to access rare events and large deviations for almost arbitrary equilibrium and non-equilibrium processes. In this way, we obtain probabilities as small as 10^-500 and smaller, hence (almost) the full distribution can be obtained in a reasonable amount of time.
Some applications are presented:
the distribution of work performed for a critical (T=2.269) two-dimensional Ising system of size L×L = 128×128 upon rapidly changing the external magnetic field (only by obtaining the distribution over hundreds of decades can one check the Jarzynski and Crooks theorems, which exactly relate the non-equilibrium work to the equilibrium free energy);
the distribution of perimeters and areas of convex hulls of finite-dimensional single and multiple random walks;
the distribution of the height fluctuations of the Kardar-Parisi-Zhang (KPZ) equation via a model of directed polymers in random media.
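As a sketch of the black-box idea (a minimal toy, with the observable chosen by me as the sum of N standard Gaussians; the applications above are far richer): one runs a Metropolis chain over the vector of random numbers with an extra artificial Boltzmann weight exp(-S/T) on the observable S. Proposing a fresh draw from the original distribution for one coordinate makes that distribution's factors cancel in the Metropolis ratio, and the unbiased distribution is recovered afterwards by reweighting with exp(+S/T).

```python
import math, random

def biased_samples(n=20, temperature=0.5, steps=20000, seed=1):
    """Metropolis chain over vectors of N(0,1) numbers with an extra
    artificial Boltzmann weight exp(-S/T), where S(x) = sum(x).
    Proposing a fresh N(0,1) draw for one coordinate makes the Gaussian
    factors cancel, leaving acceptance probability min(1, exp(-dS/T)).
    The unbiased P(S) is recovered by reweighting with exp(+S/T)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s, out = sum(x), []
    for _ in range(steps):
        i = rng.randrange(n)
        proposal = rng.gauss(0.0, 1.0)
        ds = proposal - x[i]
        if ds <= 0.0 or rng.random() < math.exp(-ds / temperature):
            x[i] = proposal
            s += ds
        out.append(s)
    return out

# After burn-in the chain sits deep in the left tail: each coordinate
# is tilted to N(-1/T, 1), so S concentrates near -n/T = -40 here.
tail = biased_samples()[5000:]
print(sum(tail) / len(tail) < -20.0)
```

Sweeping the artificial temperature T and matching the reweighted histograms in their overlap regions reconstructs P(S) over many decades; production implementations use local proposals and careful equilibration checks.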
We show that rank-ordered properties of a wide range of instances encountered in the arts (visual art, music, architecture), natural sciences (biology, ecology, physics, geophysics) and social sciences (social networks, archeology, demographics) follow a two-parameter Discrete Generalized Beta Distribution (DGBD) [1]. We present several models that produce outcomes which under rank-ordering follow DGBDs: i) expansion-modification algorithms [2], ii) death-birth master equations that lead to Langevin and Fokker-Planck equations [3], iii) symbolic dynamics of unimodal nonlinear map families and their associated thermodynamic formalism [4]. A common feature of the models is the presence of an order-disorder conflicting dynamics. In all cases “a” is associated with long-range correlations and “b” with the presence of unusual phenomena. Furthermore, the difference D = a - b determines transitions between different dynamical regimes such as chaos/intermittency.
[1] Universality in rank-ordered distributions in the arts and sciences, G. Martínez-Mekler, R. Álvarez-Martínez, M. Beltran del Rio, R. Mansilla, P. Miramontes, G. Cocho, PLoS ONE 4(3) (2009) e4791. doi:10.1371/journal.pone.0004791
[2] Order-disorder transition in conflicting dynamics leading to rank-frequency generalized beta distributions, R. Álvarez-Martínez, G. Martínez-Mekler, G. Cocho, Physica A 390 (2011) 120-130.
[3] Birth and death master equation for the evolution of complex networks, R. Álvarez-Martínez, G. Cocho, R.F. Rodríguez, G. Martínez-Mekler, Physica A (2014) 198-208.
[4] Rank ordered beta distributions of nonlinear map symbolic dynamics families with a first-order transition between dynamical regimes, R. Álvarez-Martínez, G. Cocho, G. Martínez-Mekler, Chaos 28, 075515 (2018).
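For reference, the functional form reported in [1] assigns rank r = 1, ..., N the value f(r) = A (N + 1 - r)^b / r^a, so that "a" governs the power-law head of the ranking and "b" the drop at the largest ranks. A minimal sketch:

```python
def dgbd(N, a, b):
    """Discrete Generalized Beta Distribution f(r) = A (N+1-r)^b / r^a,
    normalised so the values sum to 1 over ranks r = 1..N."""
    raw = [(N + 1 - r) ** b / r ** a for r in range(1, N + 1)]
    A = 1.0 / sum(raw)
    return [A * v for v in raw]

f = dgbd(100, 1.0, 0.5)
print(abs(sum(f) - 1.0) < 1e-9)                  # normalised
print(all(f[i] > f[i + 1] for i in range(99)))   # decreasing for a, b > 0
```

Fitting a and b to empirical rank-frequency data (e.g. by least squares on log f(r)) then yields the order-disorder diagnostic D = a - b discussed above.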
Elements composing complex systems usually interact in several different ways and as such the interaction architecture is well modelled by a network with multiple layers (a multiplex network). However, only in a few cases can such a multi-layered architecture be empirically observed, as one usually only has experimental access to such structure from an aggregated projection. A fundamental challenge is thus to determine whether the hidden underlying architecture of complex systems is better modelled as a single interaction layer or results from the aggregation and interplay of multiple layers.
Assuming a prior of intralayer Markovian diffusion, in this talk I will present a method [1] that, using local information provided by a random walker navigating the aggregated network, makes it possible to determine in a robust manner whether these dynamics can be more accurately represented by a single layer or are better explained by a (hidden) multiplex structure. In the latter case, I will also provide a Bayesian method to estimate the most probable number of hidden layers and the model parameters, thereby fully reconstructing the hidden architecture. The whole methodology enables us to decipher the underlying multiplex architecture of complex systems by exploiting the non-Markovian signatures in the statistics of a single random walk on the aggregated network.
In fact, the mathematical formalism presented here extends above and beyond detection of physical layers in networked complex systems, as it provides a principled solution for the optimal decomposition and projection of complex, non-Markovian dynamics into a Markov switching combination of diffusive modes.
I will validate the proposed methodology with numerical simulations of both (i) random walks navigating hidden multiplex networks (thereby reconstructing the true hidden architecture) and (ii) Markovian and non-Markovian continuous stochastic processes (thereby reconstructing an effective multiplex decomposition where each layer accounts for a different diffusive mode). I will also state two existence theorems guaranteeing that an exact reconstruction of the dynamics in terms of these hidden jump-Markov models is always possible for arbitrary finite-order Markovian and fully non-Markovian processes. Finally, using experiments, I will apply the methodology to understand the dynamics of RNA polymerases at the single-molecule level.
[1] L. Lacasa, I.P. Mariño, J. Miguez, V. Nicosia, E. Roldan, A. Lisica, S.W. Grill, J. Gómez-Gardeñes, Multiplex decomposition of non-Markovian dynamics and the hidden layer reconstruction problem, Physical Review X 8, 031038 (2018).
It is widely believed that to perform cognition, it is essential for a system to have an architecture in the form of a neural network, i.e. to represent a collection of relatively simple units coupled to each other with adjustable couplings. The main, if not the only, reason for this conviction is that the single natural cognitive system known to us, the brain, has this property. With this, understanding how the brain works is one of the greatest challenges of modern science.
The traditional way to study the brain is to explore its separate parts and to search for correlations and emergent patterns in their behavior. This approach does not satisfactorily answer some fundamental questions, such as how memories are stored, or how the data from detailed neural measurements could be arranged in a single picture explaining what the brain does. It is well appreciated that the mind is an emergent property of the brain, and it is important to find the right level for its description.
There has been much research devoted to describing and understanding the brain from the viewpoint of dynamical systems (DS) theory. However, the focus of this research has been on the behavior of the system, and it was largely limited to modelling of the brain, or of the phenomena occurring in the brain.
We propose to shift the focus from the brain's behavior to the ruling force behind the behavior, which in a DS is the velocity vector field. We point out that this field is a mathematical representation of the device's architecture, the result of interaction between all of the device's components, and as such represents an emergent property of the device. With this, the brain's unique feature is its architectural plasticity, i.e. a continual formation, severance, strengthening and weakening of its inter-neuron connections, which is firmly linked to its cognitive abilities. We propose that the self-organising architectural plasticity of the brain creates a plastic self-organising velocity field, which evolves spontaneously according to some deterministic laws under the influence of sensory stimuli. Velocity fields of this type have not been known in the theory of dynamical systems, and we needed to introduce them specially to describe cognition [1].
We hypothesize that the ability to perform cognition is linked to the ability to create a self-organising velocity field evolving according to some appropriate laws, rather than to the neural-network architecture per se. With this, the plastic neural network is the means to create the required velocity field, which might not be unique.
To verify our hypothesis, we construct a very simple dynamical system with a plastic velocity field, which is architecturally not a neural network, and show how it is able to perform basic cognition expected of neural networks, such as memorisation, classification and pattern recognition.
Looking at the brain through the prism of its velocity vector field offers answers to a range of questions about memory storage and pattern recognition in the brain, and delivers the sought-after link between the brain substance and the bodily behavior. At the same time, constructing various rules of self-organisation of a velocity vector field and implementing them in man-made devices could lead to artificial intelligent machines of novel types.
[1] Janson, N.B. & Marsden, C.J. Dynamical system with plastic self-organized velocity field as an alternative conceptual model of a cognitive system. Scientific Reports 7, 17007 (2017).
Many physical, biological and engineering processes can be represented mathematically by models of coupled systems with time delays. Time delays in such systems are often either hard to measure accurately, or they are changing over time, so it is more realistic to take time delays from a particular distribution rather than to assume them to be constant. In this talk, I will show how distributed time delays affect the stability of solutions in systems of coupled oscillators. Furthermore, I will present a system with distributed delays and Gaussian noise, and illustrate how to calculate the optimal path to escape from the basin of attraction of the stable steady state, as well as how the distribution of time delays influences the rate of escape away from the stable steady state. Throughout the talk, analytical calculations will be supported by numerical simulations to illustrate possible dynamical regimes and processes.
Systems with delayed interactions play a prominent role in a variety of fields, ranging from traffic and population dynamics to gene regulatory and neural networks and encrypted communications. When subjecting a semiconductor laser to reflections of its own emission, a delay results from the propagation time of the light in the external cavity. Because of its experimental accessibility and multiple applications, semiconductor lasers with delayed feedback or coupling have become one of the most studied delay systems. One of the most experimentally accessible properties to characterise these chaotic dynamics is the autocorrelation function. However, the relationship between the autocorrelation function and other nonlinear properties of the system is generally unknown. Therefore, although the autocorrelation function is often one of the key characteristics measured, it is unclear which information can be extracted from it. Here, we present a linear stochastic model with delay that allows us to derive the autocorrelation function analytically. This linear model captures fundamental properties of the experimentally obtained autocorrelation function of a laser with delayed feedback, such as the shift and asymmetric broadening of the different delay echoes. Fitting this analytical autocorrelation to its experimental counterpart, we find that the model reproduces, in most dynamical regimes of the laser, the experimental data surprisingly well. Moreover, it is possible to establish a relation between the set of parameters from the linear model and dynamical properties of the semiconductor laser, such as the relaxation oscillation frequency and damping rate.
Consider equations of motion that generate dispersion of an ensemble of particles. For a given dynamical system an interesting problem is not only what type of diffusion is generated by its equations of motion but also whether the resulting diffusive dynamics can be reproduced by some known stochastic model. I will discuss three examples of dynamical systems generating different types of diffusive transport: The first model is fully deterministic but non-chaotic, displaying a whole range of normal and anomalous diffusion under variation of a single control parameter [1]. The second model is a dissipative version of the paradigmatic standard map; weakly perturbing it by noise generates subdiffusion due to particles hopping between multiple attractors [2]. The third model randomly mixes in time chaotic dynamics generating normal diffusive spreading with non-chaotic motion where all particles localize [3]. Varying a control parameter, the mixed system exhibits a transition characterised by subdiffusion. In all three cases I will show successes, failures and pitfalls if one tries to reproduce the resulting diffusive dynamics using simple stochastic models. Joint work with all authors on the references cited below.
[1] L. Salari, L. Rondoni, C. Giberti, R. Klages, Chaos 25, 073113 (2015)
[2] C.S. Rodrigues, A.V. Chechkin, A.P.S. de Moura, C. Grebogi and R. Klages, Europhys. Lett. 108, 40002 (2014)
[3] Y. Sato, R. Klages, to be published.
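A standard numerical diagnostic for the diffusion types mentioned above (a generic sketch, not the specific models of [1-3]) is the exponent alpha in the mean squared displacement <x^2(t)> ~ t^alpha: alpha = 1 is normal diffusion, alpha < 1 subdiffusion, alpha > 1 superdiffusion. For an unbiased random walk a log-log slope estimate returns alpha close to 1:

```python
import math, random

def msd_exponent(n_walkers=500, n_steps=1000, seed=0):
    """Estimate alpha in <x^2(t)> ~ t^alpha for unbiased +/-1 random
    walks, from the log-log slope between t = 10 and t = n_steps."""
    rng = random.Random(seed)
    t_short, t_long = 10, n_steps
    msd = {t_short: 0.0, t_long: 0.0}
    for _ in range(n_walkers):
        x = 0
        for t in range(1, n_steps + 1):
            x += rng.choice((-1, 1))
            if t in msd:
                msd[t] += x * x
    for t in msd:
        msd[t] /= n_walkers
    return math.log(msd[t_long] / msd[t_short]) / math.log(t_long / t_short)

alpha = msd_exponent()
print(0.8 < alpha < 1.2)  # normal diffusion: alpha close to 1
```

The same two-point estimator applied to trajectories of the deterministic models would reveal the anomalous regimes, though in practice one fits the slope over many time points and watches for crossovers.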
The explosion in digital music information has spurred the development of mathematical models and computational algorithms for accurate, efficient, and scalable processing of music information. Total global recorded music revenue was US$17.3b in 2017, 41% of which was digital (2018 IFPI Report). Industrial-scale applications like Shazam have over 150 million monthly active users, and Spotify over 140 million. With such widespread access to large digital music collections, there is substantial interest in scalable models for music processing. Optimisation concepts and methods thus play an important role in machine models of music engagement, music experience, music analysis, and music generation. In the first part of the talk, I shall show how optimisation ideas and techniques have been integrated into computer models of music representation and expressivity, and into computational solutions to music generation and structure analysis.
Advances in medical and consumer devices for measuring and recording physiological data have given rise to parallel developments in computing in cardiology. While the information sources (music and cardiac signals) share many rhythmic and other temporal similarities, the techniques of mathematical representation and computational analysis have developed independently, as have the tools for data visualization and annotation. In the second part of the talk, I shall describe recent work applying music representation and analysis techniques to electrocardiographic sequences, with applications to personalised diagnostics, cardiac-brain interactions, and disease and risk stratification. These applications represent ongoing collaborations with Professors Pier Lambiase and Peter Taggart (UCL), and Dr. Ross Hunter at the Barts Heart Centre.
About the speaker:
Elaine Chew is Professor of Digital Media in the School of Electronic Engineering and Computer Science at Queen Mary University of London. Before joining QMUL in Fall 2011, she was a tenured Associate Professor in the Viterbi School of Engineering and Thornton School of Music (joint) at the University of Southern California, where she founded the Music Computation and Cognition Laboratory and was the inaugural honoree of the Viterbi Early Career Chair. She has also held visiting appointments at Harvard (2008-2009) and Lehigh University (2000-2001), and was Affiliated Artist of Music and Theater Arts at MIT (1998-2000). She received PhD and SM degrees in Operations Research at MIT (in 2000 and 1998, respectively), a BAS in Mathematical and Computational Sciences (honors) and in Music (distinction) at Stanford (1992), and FTCL and LTCL diplomas in Piano Performance from Trinity College, London (in 1987 and 1985, respectively).
She was awarded an ERC ADG in 2018 for the project COSMOS: Computational Shaping and Modeling of Musical Structures, and is a past recipient of a 2005 Presidential Early Career Award in Science and Engineering (the highest honor conferred on young scientists/engineers by the US Government at the White House) and Faculty Early Career Development (CAREER) Award by the US National Science Foundation, and 2007/2017 Fellowships at Harvard’s Radcliffe Institute for Advanced Studies. She is an alum (Fellow) of the (US) National Academy of Science's Kavli Frontiers of Science Symposia and of the (US) National Academy of Engineering's Frontiers of Engineering Symposia for outstanding young scientists and engineers.
Her research, centering on computational analysis of music structures in performed music, performed speech, and cardiac arrhythmias, has been supported by the ERC, EPSRC, AHRC, and NSF, and has been featured on BBC World Service/Radio 3 and in Smithsonian Magazine, the Philadelphia Inquirer, the Wired Blog, MIT Technology Review, and The Telegraph, among others.
Systems with delayed interactions play a prominent role in a variety of fields, ranging from traffic and population dynamics to gene regulatory and neural networks and encrypted communications. When a semiconductor laser is subjected to reflections of its own emission, a delay results from the propagation time of the light in the external cavity. Because of their experimental accessibility and multiple applications, semiconductor lasers with delayed feedback or coupling have become one of the most studied delay systems. One of the most experimentally accessible properties characterising these chaotic dynamics is the autocorrelation function. However, the relationship between the autocorrelation function and other nonlinear properties of the system is generally unknown; thus, although the autocorrelation function is often one of the key characteristics measured, it is unclear which information can be extracted from it. Here, we present a linear stochastic model with delay that allows the autocorrelation function to be derived analytically. This linear model captures fundamental properties of the experimentally obtained autocorrelation function of a laser with delayed feedback, such as the shift and asymmetric broadening of the different delay echoes. Fitting this analytical autocorrelation to its experimental counterpart, we find that the model reproduces the experimental data surprisingly well in most dynamical regimes of the laser. Moreover, it is possible to relate the parameters of the linear model to dynamical properties of the semiconductor laser, such as the relaxation oscillation frequency and the damping rate.
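As a toy illustration of how delayed feedback shapes the autocorrelation function, the following sketch simulates a discrete-time linear stochastic model with delay (my own minimal discretisation, not the model presented in the talk) and measures the echo near the delay lag:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, tau, n = 0.8, 0.15, 50, 200_000   # decay, feedback strength, delay, length

# Linear stochastic map with delayed feedback; |a| + |b| < 1 keeps it stable.
x = np.zeros(n)
for t in range(tau, n - 1):
    x[t + 1] = a * x[t] + b * x[t - tau] + rng.normal()

def autocorr(sig, lag):
    """Normalised autocorrelation at a single positive lag."""
    s = sig - sig.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

# The delayed feedback leaves an 'echo' in the autocorrelation near lag tau,
# well above the correlation level away from the delay.
echo = max(autocorr(x, k) for k in range(tau, tau + 11))
background = autocorr(x, tau + 40)
```

In this toy model the echo sits near lag tau and decays away from it; the shift and broadening of the echoes discussed in the talk arise from the interplay of the delay with the local decay rate.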
Here we discuss some exact mathematical results in percolation theory, including the triangle-triangle duality transformation, results for 4-hypergraphs, and the application of Euler's formula to study the number of clusters on a lattice and its dual. The latter leads to procedures for approximating the threshold to high precision efficiently, as carried out by J. Jacobsen for a variety of Archimedean lattices. The ideas of crossing probabilities on open systems, going back to the work of J. Cardy and of G. M. T. Watts, and of wrapping probabilities on a torus, going back to Pinson, will also be discussed. These results are limited to two-dimensional systems.
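The cluster counts that enter the Euler-formula arguments can be made concrete with a small union-find sketch (the grid, helper names, and test configurations are my own illustration, not from the talk):

```python
import numpy as np

def count_clusters(grid):
    """Number of nearest-neighbour clusters of open sites (union-find)."""
    L = grid.shape[0]
    parent = {(i, j): (i, j) for i in range(L) for j in range(L) if grid[i, j]}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s

    for (i, j) in list(parent):
        for nb in ((i + 1, j), (i, j + 1)):   # right and down neighbours
            if nb in parent:
                ri, rn = find((i, j)), find(nb)
                if ri != rn:
                    parent[ri] = rn

    return len({find(s) for s in parent})

# Deterministic sanity checks: a fully open grid is one cluster; a
# checkerboard of open sites has no nearest-neighbour bonds, so every
# open site is its own cluster.
full = np.ones((4, 4), dtype=bool)
checker = np.indices((4, 4)).sum(axis=0) % 2 == 0
```

Averaging such cluster counts over random configurations on a lattice and its dual is the quantity to which the Euler-formula arguments apply.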
TBA
Temporal graphs (in which edges are active only at specified time
steps) are an increasingly important and popular model for a wide variety
of natural and social phenomena. I'll talk a bit about what's been going on
in the world of temporal graphs, and then go on to the idea of graph
modification in a temporal setting.
Motivated by a particular agricultural example, I’ll talk about the
temporal nature of livestock networks, with a quick diversion into
recognising the periodic nature of some cattle trading systems. With
bovine epidemiology in mind, I'll talk about a particular modification
problem in which we assign times to edges so as to maximise or minimise
reachability sets within a temporal graph. I'll mention an assortment of
complexity results on these problems, showing that they are hard under a
disappointingly large variety of restrictions. In particular, if edges can
be grouped into classes that must be assigned the same time, then the
problem is hard even on directed acyclic graphs when both the reachability
target and the classes of edges are of constant size, as well as on an
extremely restrictive class of trees. The situation is slightly better if
each edge is active at a unique timestep - in some very restricted cases
the problem is solvable in polynomial time. (Joint work with Kitty Meeks.)
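The objects in these assignment problems can be made concrete with a small sketch (my own toy encoding, not from the talk): given an assignment of times to directed edges, compute the set of vertices reachable along time-respecting paths, and note how two assignments of times to the same edges give different reachability sets.

```python
def temporal_reach(source, timed_edges):
    """Vertices reachable from `source` along paths whose edge times
    strictly increase; edges are (time, tail, head) triples."""
    arrival = {source: 0}                  # earliest arrival time per vertex
    for t, u, v in sorted(timed_edges):    # process edges in time order
        if u in arrival and arrival[u] < t:
            arrival[v] = min(arrival.get(v, t), t)
    return set(arrival)

# Path a -> b -> c: increasing times let influence traverse the whole path...
edges_up = [(1, 'a', 'b'), (2, 'b', 'c')]
# ...while decreasing times block the second hop.
edges_down = [(2, 'a', 'b'), (1, 'b', 'c')]
```

The modification problems in the talk ask how to choose such time assignments, possibly with edges grouped into classes sharing a time, so as to maximise or minimise these reachability sets.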
Certain classes of higher-order networks can be interpreted as discrete geometries. This creates a relation with approaches to non-perturbative quantum gravity, where one also studies ensembles of geometries of this type. In the framework of Causal Dynamical Triangulations (CDT) the regularised Feynman path integral over curved space-times takes the form of a sum over simplicial geometries (triangulated spaces) of fixed dimension and topology. One key challenge of quantum gravity is to characterise the geometric properties of the resulting "quantum geometry" in terms of a set of suitable observables. Well-known examples of observables are the Hausdorff and spectral dimension. After a short introduction of central concepts in CDT, I will describe recent attempts to study the possible emergence of global symmetries in quantum geometries. This involves the analysis of the spectra of an operator related to the discrete 1-Laplacian, whose eigenvectors are the discrete analogues of Killing vector fields in the continuum.
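The operator in the talk is a discrete 1-Laplacian acting on edges; as a simpler, hedged illustration of spectral observables on a discrete geometry, the following computes the spectrum of the ordinary vertex (0-)Laplacian of the boundary of a tetrahedron, the smallest triangulation of the 2-sphere:

```python
import numpy as np

# Boundary of a tetrahedron: the smallest triangulation of the 2-sphere.
# Vertex Laplacian L = D - A; every vertex has degree 3.
A = np.ones((4, 4)) - np.eye(4)
L = np.diag(A.sum(axis=1)) - A

eigvals = np.sort(np.linalg.eigvalsh(L))
# One zero mode (the geometry is connected) and a triply degenerate
# eigenvalue 4: the degeneracy reflects the symmetry of the triangulation.
```

Degeneracies in such spectra are the kind of signal one looks for when searching for emergent symmetries, with the 1-Laplacian playing this role for approximate Killing vector fields.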
A central problem in uncertainty quantification is how to characterize the impact that our incomplete knowledge about models has on the predictions we make from them. This question naturally lends itself to a probabilistic formulation, by making the unknown model parameters random with given statistics. Here this approach is used in concert with tools from large deviation theory (LDT) and optimal control to estimate the probability that some observables in a dynamical system go above a large threshold after some time, given the prior statistical information about the system's parameters and its initial conditions. We use this approach to quantify the likelihood of extreme surface elevation events for deep sea waves, so-called rogue waves, and compare the results to experimental measurements. We show that our approach offers a unified description of rogue wave events in the one-dimensional case, covering a vast range of parameters. In particular, it includes both the predominantly linear regime and the highly nonlinear regime as limiting cases, and is able to predict the experimental data regardless of the strength of the nonlinearity.
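The LDT logic can be caricatured in one dimension (a toy of my own, not the rogue-wave computation): the log-probability that an observable of a Gaussian parameter exceeds a threshold is approximated by minimising the rate function over the exceedance set, and a direct Monte Carlo estimate confirms the exponent up to a subexponential prefactor.

```python
import numpy as np

# Parameter with standard Gaussian prior; observable f(x) = x**2; ask
# for P(f >= z). LDT estimate: P ~ exp(-min_{f(x) >= z} I(x)) with rate
# function I(x) = x**2 / 2, minimised on the boundary at x* = sqrt(z).
def ldt_log_tail(z):
    return -z / 2.0

rng = np.random.default_rng(1)
z = 9.0
samples = rng.normal(size=2_000_000)
mc = float(np.mean(samples**2 >= z))   # direct Monte Carlo estimate
```

For rare events the Monte Carlo estimate becomes infeasible while the optimisation still scales, which is the practical point of the LDT-plus-optimal-control approach.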
Using extensive Monte Carlo simulations, we investigate the surface adsorption of self-avoiding trails on the triangular lattice with two- and three-body on-site monomer-monomer interactions. In the parameter space of two-body, three-body, and surface interaction strengths, the phase diagram displays four phases: swollen (coil), globule, crystal, and adsorbed. For small values of the surface interaction, we confirm the presence of swollen, globule, and crystal bulk phases. For sufficiently large values of the surface interaction, the system is in an adsorbed state, and the adsorption transition can be continuous or discontinuous, depending on the bulk phase. As such, the phase diagram contains a rich phase structure with transition surfaces that meet in multicritical lines joining in a single special multicritical point. The adsorbed phase displays two distinct regions with different characteristics, dominated by either single or double layer adsorbed ground states. Interestingly, we find that there is no finite-temperature phase transition between these two regions, but rather a smooth crossover.
The Paris conference 2015 set a path to limit climate change to "well below 2°C". To reach this goal, integrating renewable energy sources into the electrical power grid is essential, but it poses an enormous challenge to the existing system and demands new conceptual approaches. In this talk, I will introduce the basics of power grid operation and outline some pressing challenges to the power grid. In particular, I present our latest research on power grid fluctuations and how they threaten robust grid operation. For our analysis, we collected frequency recordings from power grids in North America, Europe and Japan, finding significant deviations from Gaussianity. We developed both a coarse-grained framework to characterize analytically the impact of arbitrary noise distributions and a superstatistical approach. This already provides an opportunity to plan future grids. Finally, I will outline my recently started Marie-Curie project DAMOSET, which focuses on building an open database of measurements to deepen our understanding.
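Deviations from Gaussianity of the kind reported for frequency recordings are commonly quantified by the excess kurtosis; here is a minimal sketch on synthetic data (the heavy-tailed surrogate is my stand-in, not the measured grid data):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (zero for a Gaussian)."""
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2 - 3.0)

rng = np.random.default_rng(2)
gauss = rng.normal(size=500_000)    # Gaussian baseline: excess kurtosis ~ 0
heavy = rng.laplace(size=500_000)   # heavy-tailed surrogate: excess ~ 3
```

A clearly positive excess kurtosis in measured frequency deviations signals the heavy tails that a purely Gaussian noise model would miss.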
In this talk, which should be accessible to a general audience, I will discuss the notion of epsilon-entropy introduced by Kolmogorov in the 1950s, as a measure of the complexity of compact sets in a metric space.
I will then discuss a new proof for a problem originally raised by Kolmogorov on the precise asymptotics of the epsilon-entropy of compact sets of holomorphic functions which relies on ideas from operator theory and potential theory.
This is joint work with Stephanie Nivoche (Nice).
The affinity dimension, introduced by Falconer in the 1980s, is the `typical' value of the Hausdorff dimension of a self-affine set. In 2014, Feng and Shmerkin proved that the affinity dimension is continuous as a function of the maps defining the self-affine set, thus resolving a long-standing open problem in the fractal geometry community. In this talk we will discuss stronger regularity properties of the affinity dimension in some special cases. This is based on recent work with Ian Morris.
The classical Lorenz flow, and any flow which is close to it in the C^{2}-topology, satisfies a Central Limit Theorem (CLT). We first prove statistical stability and then prove that the variance in the CLT varies continuously for this family of flows and for general geometric Lorenz flows, including extended Lorenz models where certain stable foliations have weaker regularity properties.
This is joint work with I. Melbourne and Marks Ruziboev.
TBA
Complex dynamical systems driven by the unravelling of information can be modelled effectively by treating the underlying flow of information as the model input. The complicated dynamical behaviour of the system is then derived as an output. Such an information-based approach is in sharp contrast to the conventional mathematical modelling of information-driven systems, whereby one attempts to come up with essentially ad hoc models for the outputs. In this talk, the dynamics of electoral competition is modelled by specifying the flow of information relevant to the election. The seemingly random evolution of the election poll statistics is then derived as a model output, which in turn is used to study election prediction, the impact of disinformation, and the optimal strategy for information management in an election campaign.
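A toy version of the information-based approach (my own simplification: Brownian-motion noise rather than a bridge, and a binary election outcome) generates the poll statistic as the posterior probability of a win given the accumulated signal:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, p0, T, n = 1.0, 0.5, 1.0, 1000   # signal strength, prior, horizon, steps
dt = T / n
t = np.linspace(dt, T, n)

X = 1                                    # realised outcome: the candidate wins
# Information process: drift sigma*X*t revealing X, buried in Brownian noise.
xi = sigma * X * t + np.cumsum(rng.normal(scale=np.sqrt(dt), size=n))

# Poll statistic = posterior probability P(X = 1 | xi_t), by Bayes' rule.
lik = np.exp(sigma * xi - 0.5 * sigma**2 * t)
poll = p0 * lik / (1 - p0 + p0 * lik)
```

The poll path starts at the prior and fluctuates as information arrives; in this framework disinformation could be modelled as a bias injected into the noise term, and sigma tunes the rate at which the true outcome is revealed.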
TBA
TBA
TBA
TBA
TBA
TBA
TBA
TBA
TBA
TBA
The meeting will start at 2:15pm and the schedule is as follows.
2:15pm Chris Good (Birmingham)
Shifts of finite type as fundamental objects in the theory of shadowing
3:30pm Polina Vytnova (Warwick)
Dimension of Bernoulli convolutions: computer assisted estimates
5:00pm Mike Todd (St Andrews)
Escape of entropy
Abstracts are available at the workshop webpage.
We give a review of some recent results on extreme value theory applied to dynamical systems using the spectral approach to the transfer operator. In particular, this allows us to treat high-dimensional cases and open systems with holes, and to give a precise computation of the extremal index.