Our colloquium takes place on the first Tuesday of each month from 15:30 to 16:30, usually in room A709.
A renowned expert (who is also an excellent speaker) visits us for an afternoon and gives a panorama of one of her research areas. The talk is meant to be accessible to all members of the lab, including PhD students in analysis, game theory, probability and statistics. Ideally, it should start gently with some historical background on the problem and an overview of the main questions and applications, keeping a non-technical style for at least the first half of the talk. Of course it is also nice to have a part with more mathematical details: the most appreciated colloquia were those in which the speaker succeeded in developing a nice technical idea or an elegant argument that everyone should know.
Food and drinks are served after the event, usually in Espace 7!
Date: Tuesday, April 1st 2025 (15:30-16:30, room A709)
Speaker: (Université Côte d'Azur)
Title: Kalikow decomposition for the study of neuronal networks: simulation and learning
Abstract: Kalikow decomposition is a decomposition of stochastic processes (usually finite-state discrete-time processes, but more recently also point processes) that consists in picking at random a finite neighborhood in the past and then making a transition in a Markov manner. This kind of approach has been used for many years to prove the existence of certain processes, and especially of their stationary distributions. In particular, it makes it possible to prove the existence of processes that model infinite neuronal networks, such as Hawkes-like processes or Galves-Löcherbach processes. But beyond mere existence, this decomposition is a wonderful tool to simulate such networks, as open physical systems, in a way that from a computational point of view could be competitive with the most performant brain simulations. This decomposition is also a source of inspiration for understanding how local rules at each neuron can make the whole network learn.
Figure: Simulation of the activity of neuron 0 by Kalikow decomposition. Neuron 0 is immersed in an infinite complete graph on Z² with Gaussian weights. In a), the number of requests and the time spent on the other neurons to simulate neuron 0 for 50 s. On the right, a zoom on the distribution of the requests in time for the 8 neurons surrounding neuron 0.
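To give a flavor of the two-step mechanism described in the abstract, here is a minimal toy sketch (not the speaker's actual model: the parent set, weights and spiking probabilities below are all hypothetical): at each time step, neuron 0 first draws a random finite neighborhood in the past, here a single parent neuron or the empty neighborhood, and then spikes with a probability that depends only on that chosen neighborhood.

```python
import random

random.seed(0)

# Hypothetical toy parameters, for illustration only.
PARENTS = [1, 2, 3, 4]                 # finite set of potential presynaptic neurons
NEIGHBOR_PROBS = [0.2, 0.2, 0.2, 0.2]  # probability of picking each parent
SPONTANEOUS = 0.2                      # probability of the empty neighborhood

def pick_neighborhood():
    """Kalikow step 1: choose a random (finite) neighborhood in the past."""
    u = random.random()
    if u < SPONTANEOUS:
        return None                    # empty neighborhood: spontaneous regime
    acc = SPONTANEOUS
    for parent, w in zip(PARENTS, NEIGHBOR_PROBS):
        acc += w
        if u < acc:
            return parent
    return PARENTS[-1]

def step(last_state):
    """Kalikow step 2: Markov transition given only the chosen neighborhood."""
    parent = pick_neighborhood()
    if parent is None:
        return int(random.random() < 0.1)          # baseline spiking probability
    # spike with higher probability if the chosen parent spiked last time
    rate = 0.6 if last_state[parent] else 0.05
    return int(random.random() < rate)

# Simulate: parents spike i.i.d. with probability 0.3 at each step,
# and neuron 0 follows the two-step rule above.
state = {p: 0 for p in PARENTS}
spikes = []
for t in range(1000):
    spikes.append(step(state))
    state = {p: int(random.random() < 0.3) for p in PARENTS}

print(sum(spikes))  # total number of spikes of neuron 0 over 1000 steps
```

The computational appeal mentioned in the abstract comes from step 1: only the neurons actually drawn in the neighborhood need to be inspected (or recursively simulated), which is what makes the approach viable on infinite graphs such as the Z² network of the figure.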