Please note the unusual venue!
This talk will take place in Amphithéâtre 25 (entrance facing Tower 25, slab level, Jussieu campus, Sorbonne Université, 4 place Jussieu, Paris 5e); it will also be streamed simultaneously via Zoom.
It is given as part of the eighth edition of the Leçons Jacques-Louis Lions, which will also include a mini-course entitled
Ensemble Kalman filter: Algorithms, analysis and applications
to be given on Tuesday 12, Wednesday 13 and Thursday 14 December 2023; see below the presentation of the Leçons Jacques-Louis Lions 2023, as well as the web page https://www.ljll.fr/event/lecons-jacques-louis-lions-2023-andrew-stuart/
Andrew Stuart (California Institute of Technology)
Neural networks have shown great success at learning approximations of maps between spaces X and Y, in the setting where X is a finite-dimensional Euclidean space and Y is either a finite-dimensional Euclidean space (regression) or a set of finite cardinality (classification); the networks learn the approximation from N data pairs (x_n, y_n). In many problems arising in physics it is desirable to learn maps between spaces of functions X and Y; this may be either for the purpose of scientific discovery, or to provide cheap surrogate models that accelerate computations. New ideas are needed to address this learning problem in a scalable, efficient manner.
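As a rough illustration of how a map between function spaces can be parameterized, the sketch below implements a single Fourier layer in the spirit of Fourier neural operators: the layer acts on a fixed number of Fourier modes, so the same (here random, untrained) parameters can be applied to input functions sampled at any resolution. The architecture, parameter names and values are illustrative assumptions, not the specific methods surveyed in the talk.

```python
import numpy as np

def fourier_layer(u, R, W, k_max):
    """Apply one Fourier-neural-operator-style layer to a function u
    sampled on a uniform grid of arbitrary resolution.

    u     : (n,) samples of the input function on [0, 1)
    R     : (k_max,) complex spectral weights acting on the lowest modes
    W     : scalar weight of the pointwise (bypass) term
    k_max : number of Fourier modes retained (fixed, resolution-independent)
    """
    u_hat = np.fft.rfft(u)                      # spectral coefficients of u
    v_hat = np.zeros_like(u_hat)
    v_hat[:k_max] = R * u_hat[:k_max]           # learned action on low modes only
    v = np.fft.irfft(v_hat, n=u.size)           # back to physical space
    return np.maximum(W * u + v, 0.0)           # pointwise term + ReLU nonlinearity

# The same (hypothetical, untrained) parameters act on any discretization:
rng = np.random.default_rng(0)
k_max = 8
R = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)
W = 0.5
for n in (64, 256):                             # two resolutions, one operator
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
    print(n, fourier_layer(u, R, W, k_max).shape)
```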
In this talk I will give an overview of the methods that have been introduced in this area and describe theoretical results underpinning the emerging methodologies. Illustrations will be drawn from a variety of PDE-based problems, including learning the solution operator for dissipative PDEs, learning the homogenization operator in various settings, and learning the smoothing operator in data assimilation.
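To make "learning the solution operator for a dissipative PDE" concrete, the sketch below generates data pairs (initial condition, solution at time T) for the one-dimensional heat equation on the torus, a prototypical dissipative PDE, and fits a simple mode-by-mode spectral surrogate by least squares. The choice of equation, the linear surrogate, and all parameter values are illustrative assumptions, not the approach presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, T, nu, k_max = 128, 200, 0.1, 0.05, 16
freqs = np.fft.rfftfreq(n, d=1.0 / n)            # integer wavenumbers 0..n/2

def random_field():
    """Draw a smooth random initial condition via decaying Fourier modes."""
    amp = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    return np.fft.irfft(amp / (1.0 + freqs**2), n=n)

def heat_solution(u0):
    """Exact solution of u_t = nu * u_xx on the torus at time T (dissipative)."""
    return np.fft.irfft(np.fft.rfft(u0) * np.exp(-nu * (2 * np.pi * freqs)**2 * T), n=n)

# Training pairs (x_n, y_n) = (initial condition, solution at time T).
X = np.stack([random_field() for _ in range(N)])
Y = np.stack([heat_solution(u0) for u0 in X])

# Surrogate: complex multiplier on the lowest k_max Fourier modes,
# fitted by least squares -- a linear stand-in for a learned operator.
Xh = np.fft.rfft(X, axis=1)[:, :k_max]
Yh = np.fft.rfft(Y, axis=1)[:, :k_max]
multiplier = (np.conj(Xh) * Yh).sum(axis=0) / (np.abs(Xh)**2).sum(axis=0)

def surrogate(u0):
    vh = np.zeros(freqs.size, dtype=complex)
    vh[:k_max] = multiplier * np.fft.rfft(u0)[:k_max]
    return np.fft.irfft(vh, n=n)

# Evaluate the cheap surrogate against the true solution operator on new data.
u_test = random_field()
err = (np.linalg.norm(surrogate(u_test) - heat_solution(u_test))
       / np.linalg.norm(heat_solution(u_test)))
print(f"relative error of the fitted surrogate: {err:.2e}")
```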