Nathanaël Fijalkow
I am a computer scientist working on program synthesis, games, and automata.
Get in touch: nathanael.fijalkow@gmail.com
Writing computer programs is a complicated, costly, and error-prone task. Program synthesis is the ideal where a program is automatically generated from its specification. I am particularly interested in inductive synthesis, where the specification consists of a set of examples.
Representative work in this topic:
Scaling Neural Program Synthesis with Distribution-based Search. Nathanaël Fijalkow and Guillaume Lagarde and Théo Matricon and Kevin E. Ellis and Pierre Ohlmann and Akarsh Potta. AAAI Conference on Artificial Intelligence, AAAI. 2022
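As a toy illustration of the inductive setting (and only that: the paper above uses neural-guided, distribution-based search over much richer languages), here is a minimal Python sketch that enumerates small arithmetic expressions until one agrees with all the examples:

```python
def candidates(depth):
    """Yield expression strings built from x, the constants 1 and 2, + and *."""
    if depth == 0:
        yield from ["x", "1", "2"]
        return
    for left in candidates(depth - 1):
        for right in candidates(depth - 1):
            yield f"({left} + {right})"
            yield f"({left} * {right})"

def synthesize(examples, max_depth=2):
    """Return the first expression agreeing with every (input, output) example."""
    for depth in range(max_depth + 1):
        for expr in candidates(depth):
            if all(eval(expr, {"x": x}) == y for x, y in examples):
                return expr
    return None

# The specification is purely a set of examples: f(1)=3, f(2)=5, f(3)=7
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # an expression equivalent to 2*x + 1
```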
Reactive synthesis is the special case of program synthesis where the program takes actions over time in a partially controllable environment. I focus on temporal synthesis, where the specification is given by a logical formula in linear temporal logic (LTL) and its extensions.
Representative work in this topic:
Assume-Guarantee Synthesis for Prompt Linear Temporal Logic. Nathanaël Fijalkow and Bastien Maubert and Aniello Murano and Moshe Y. Vardi. International Joint Conference on Artificial Intelligence, IJCAI. 2020
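To give a flavour of temporal specifications, here is a hypothetical Python sketch checking the LTL-style property G(request -> F grant), "every request is eventually granted", on a finite trace. Actual temporal synthesis reasons about infinite behaviours and produces a controller rather than a checker; this is only an illustration.

```python
def every_request_granted(trace):
    """trace: list of sets of atomic propositions, e.g. {'request', 'grant'}."""
    for i, step in enumerate(trace):
        if "request" in step:
            # F grant from position i: some current or later step grants
            if not any("grant" in later for later in trace[i:]):
                return False
    return True

good = [{"request"}, set(), {"grant"}, {"request", "grant"}]
bad = [{"request"}, set(), set()]
print(every_request_granted(good), every_request_granted(bad))  # True False
```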
Games on graphs lie at the intersection of several fields: verification (model-checking games such as parity games), logic and model theory (Ehrenfeucht–Fraïssé games), automata theory (emptiness and acceptance games), reinforcement learning (Markov decision processes), and optimisation (mean-payoff and discounted games).
Representative work in this topic:
The Theory of Universal Graphs for Infinite Duration Games. Thomas Colcombet and Nathanaël Fijalkow and Pawel Gawrychowski and Pierre Ohlmann. Logical Methods in Computer Science, LMCS. 2022
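As a minimal illustration, here is a sketch of the simplest such game, a reachability game, solved by the classical attractor computation. The arena, the owners, and the player names Eve and Adam are a made-up toy example.

```python
def attractor(vertices, edges, owner, target):
    """Vertices from which Eve can force reaching the target set."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            successors = [u for (w, u) in edges if w == v]
            eve_can = owner[v] == "Eve" and any(u in win for u in successors)
            adam_must = owner[v] == "Adam" and successors and all(u in win for u in successors)
            if eve_can or adam_must:
                win.add(v)
                changed = True
    return win

vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "c"), ("d", "d")]
owner = {"a": "Eve", "b": "Eve", "c": "Eve", "d": "Adam"}
print(attractor(vertices, edges, owner, target={"c"}))  # Eve wins from a, b, and c
```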
Markovian models are stochastic models with memoryless dynamics. The difference with probabilistic automata is that Markovian models such as Markov decision processes are fully observable.
Representative work in this topic:
Controlling a Random Population. Thomas Colcombet and Nathanaël Fijalkow and Pierre Ohlmann. 2021
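For a concrete (toy) example of a fully observable Markovian model, here is a sketch of value iteration on a tiny Markov decision process; the states, actions, and rewards are made up for illustration and unrelated to the paper above.

```python
# mdp[state][action] = list of (probability, next_state, reward)
mdp = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(mdp, gamma=0.9, iterations=100):
    """Compute (approximate) optimal discounted values by iterating the Bellman operator."""
    values = {s: 0.0 for s in mdp}
    for _ in range(iterations):
        values = {
            s: max(sum(p * (r + gamma * values[t]) for p, t, r in outcomes)
                   for outcomes in actions.values())
            for s, actions in mdp.items()
        }
    return values

print(value_iteration(mdp))  # optimal discounted values for s0 and s1
```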
A dynamical system follows the evolution of a point through repeated applications of a function; the special case of linear dynamical systems is concerned with linear functions, i.e. multiplication by a matrix. Their algorithmic study is deeply intertwined with insights from algebraic number theory and geometry. I am particularly interested in invariants for linear dynamical systems, and in related control problems.
Representative work in this topic:
Complete Semialgebraic Invariant Synthesis for the Kannan-Lipton Orbit Problem. Nathanaël Fijalkow and Pierre Ohlmann and Joël Ouaknine and Amaury Pouly and James Worrell. 2019
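To make the objects concrete, here is a toy Python sketch of the orbit of a point under a matrix. It only explores the orbit up to a bound; proving non-reachability in general requires an invariant, a set containing the whole orbit and excluding the target, which is the kind of certificate the paper above constructs. The matrix, starting point, and target are hypothetical.

```python
def mat_vec(M, v):
    """Multiply matrix M by vector v."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M)))

M = [[2, 0], [1, 3]]   # the dynamics x -> M x
x = (1, 1)             # starting point
target = (5, 17)       # is the target ever reached?

point = x
for step in range(10):
    if point == target:
        print(f"target reached at step {step}")
        break
    point = mat_vec(M, point)
else:
    # Bounded exploration is not a proof: a genuine non-reachability
    # certificate is an invariant separating the orbit from the target.
    print("target not reached within 10 steps (no certificate, just a bound)")
```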
The study of probabilistic automata, in particular algorithms for learning and controlling them, has many applications, including program verification, natural language processing, modelling of biological systems, and machine learning.
Representative work in this topic:
Undecidability results for probabilistic automata. Nathanaël Fijalkow. 2017
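As a toy illustration, here is a sketch of how a probabilistic automaton assigns a value to a word: each letter acts by a stochastic matrix, and the value is the probability of ending in an accepting state. The two-state automaton below is made up; the undecidability results concern questions quantifying over all words (such as emptiness above a threshold), not evaluating a single word.

```python
initial = [1.0, 0.0]            # start in state 0
accepting = [0.0, 1.0]          # state 1 is accepting
transition = {
    "a": [[0.5, 0.5], [0.0, 1.0]],   # row-stochastic: state -> distribution
    "b": [[1.0, 0.0], [0.3, 0.7]],
}

def value(word):
    """Probability that reading the word ends in an accepting state."""
    dist = initial[:]
    for letter in word:
        M = transition[letter]
        dist = [sum(dist[i] * M[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return sum(d * f for d, f in zip(dist, accepting))

print(value("ab"))   # acceptance probability of the word "ab"
```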
Short bio
I am a junior researcher (chargé de recherche) at CNRS in LaBRI, Bordeaux, and head of the Synthesis team. I was visiting the University of Warsaw for the academic year 2022 -- 2023, and until 2022 I was a research fellow at The Alan Turing Institute for data science and artificial intelligence in London.
There are many open positions in the PEPR IA project SAIF (Safe AI through Formal Methods): please get in touch!
Games on Graphs
The Games on Graphs book is online (and free)! It's a collaborative textbook on Games on Graphs: 490 pages, 13 chapters, 17 authors, and even more monkeys. A printed version will be published in 2024 by Cambridge University Press!
Chat TGV
What would happen if Chat TGV could reason, cite, justify?
Preprints
Theoretical foundations for programmatic reinforcement learning
Programmatic Reinforcement Learning studies representations of policies as programs, meaning that they involve higher-order constructs such as control loops. The goal of this paper is to initiate a theoretical study of programmatic RL.
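As a purely illustrative example of what a programmatic policy looks like (a made-up toy, not taken from the paper), here is a policy written as a short, readable program on a one-dimensional chain, executed in a simple environment loop:

```python
def programmatic_policy(position, goal):
    """A policy expressed as a program with explicit control structure."""
    if position < goal:
        return "right"
    elif position > goal:
        return "left"
    else:
        return "stay"

# Executing the policy is just running the program inside the environment loop.
position, goal = 0, 3
for _ in range(10):
    action = programmatic_policy(position, goal)
    if action == "right":
        position += 1
    elif action == "left":
        position -= 1
print(position)  # 3: the program reaches the goal
```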
Learning temporal formulas from examples is hard
Journal submission -- (almost) all you ever wanted to know about the complexity of learning LTL formulas from examples
Recent publications
For recent slides and lecture notes see below
CAV 2024: LTL learning on GPUs
This paper presents an algorithm for learning LTL formulas that leverages the power of GPUs, achieving major improvements over the state of the art!
La Recherche (April 2024): L’IA dans la synthèse de programmes
In French: a popular-science article about AI and program synthesis, published in La Recherche
TheoretiCS 2024: From Muller to Parity and Rabin Automata: Optimal Transformations Preserving (History-)Determinism
This journal paper constructs optimal transformations between automata over infinite words
JOSS 2024: Scarlet: Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic
This software, published in the Journal of Open Source Software (JOSS), is a state-of-the-art tool for learning (fragments of) LTL
LMCS 2024: Playing Safe, Ten Years Later
This journal paper, published 10 years after the conference version, gives an updated view of this result, since widely extended, characterising the memory requirements for topologically closed objectives.
VMCAI 2024: Synthesizing Efficiently Monitorable Formulas in Metric Temporal Logic
This paper shows how to push LTL learning to a quantitative setting, learning Metric Temporal Logic formulas
News
Launch of the Synthesis team
We started a new research team in LaBRI called "Synthesis", led by Anca Muscholl and myself!
Tutorial on Machine Learning guided Program Synthesis
I gave a three-hour tutorial at POPL (Symposium on Principles of Programming Languages) on 15 January 2024.
Scroll down this page to find the slides
Workshop on Trustworthy AI
I am coorganising a workshop on the Theoretical Aspects of Trustworthy AI, to be held at the Simons Institute for the Theory of Computing in Berkeley in 2025.
Collaboration project between Bordeaux, Paris, and Warsaw
The CNRS International Research Project (IRP) Le Trójkąt between LaBRI, IRIF, and the University of Warsaw, Poland, will run for five years (Jan. 2024 -- Dec. 2028).
Launch of the LLM4Code INRIA Challenge
Very excited to be part of the LLM4Code INRIA Challenge! More coming soon
PEPR IA SAIF
I am part of the PEPR IA project SAIF (Safe AI through Formal Methods), starting in September 2023. We have many open positions in LaBRI, please get in touch!
Program Committees
AAAI 2024
AAAI 2024, the AAAI Conference on Artificial Intelligence
VMCAI 2024
VMCAI 2024, the International Conference on Verification, Model Checking, and Abstract Interpretation
DAV 2023
DAV 2023, the International Workshop on Deep Learning-aided Verification
QEST 2023
QEST 2023, the International Conference on Quantitative Evaluation of SysTems
IJCAI 2023
IJCAI 2023, the International Joint Conference on Artificial Intelligence
Research Blog
A collection of research notes: short introductions to recent papers, classical results, and open problems
Value iteration for parity games
Upper and lower bounds for universal trees
The interplay between parity games and universal trees
Proud of (non-exhaustive list)
Théo Matricon
Program synthesis, programmatic reinforcement learning, restarts
Rémi Morvan
Logic and automata theory, database theory, knowledge clustering
Gabriel Bathie
Regular property testing, Rust, Text algorithms
(former) PhD students who broke free
- Pierre Ohlmann, now CNRS in Marseille
- Ritam Raha, now postdoc at the Max Planck Institute, Germany
- Antonio Casares, now postdoc in Warsaw, Poland
Slides and lecture notes
LLM from scratch
This is a notebook created for a course I taught where the goal was to code a whole Transformer from scratch.
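As a hint of what the notebook covers, here is a minimal sketch of single-head scaled dot-product self-attention in numpy; the notebook itself builds a full Transformer, and this snippet is only an illustrative fragment, not an excerpt from it.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # (seq_len, d_head)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (4, 8)
```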
Tutorial on Machine Learning Meets Program Synthesis
I have given different versions of this tutorial: POPL 2024, FM 2023, and ECAI 2020
Seminar talk on LTL learning
This version was given in GREYC, Caen, France on 16/01/2024. Many previous versions were given in many different places...
Course on Infinite duration games on graphs
A 12-hour course, taught at MPRI since 2019
Course on Reinforcement Learning
Taught, among other places, at ENSEIRB (an engineering school) since 2020, at the University of Bordeaux, and at the Alan Turing Institute as a research school.
Habilitation defense
I defended my habilitation on 11 Feb 2022
Presentation of PEPR IA SAIF
This is an informal presentation of the planned activities of PEPR IA SAIF in LaBRI