François Laviolette is a full professor in the Department of Computer Science and Software Engineering at Université Laval, director of the Big Data Research Center (BDRC) at Université Laval, holder of a Canada CIFAR AI Chair on Interpretable Machine Learning in Artificial Intelligence (2020-2025), and holder of the NSERC Industrial Research Chair in Machine Learning for Insurance (2018-2023). He is a member of the scientific committees of the PULSAR project, the VALERIA platform, and the Intelligence and Data Institute (IID). At the national and international levels, he is an associate member of the Mila Institute, a member of the artificial intelligence (AI)/health committee of the Fonds de Recherche du Québec (FRQ), the scientific committee of the DATAIA Institute in France, and the AI expert committee of the Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA) at Université Laval.
He obtained a bachelor's degree in mathematics in 1984, a master's degree in 1987, and a Ph.D. in mathematics in 1997, all from the University of Montreal.
His research interests focus on artificial intelligence, especially machine learning, learning theory, interpretable AI, graph theory, automated verification, and bioinformatics.
Professor François Laviolette is a leader in PAC-Bayesian theory, a branch of learning theory that provides a better understanding of machine learning algorithms and enables the design of new ones. Among these, he is particularly interested in algorithms that solve new types of learning problems, especially those arising in genomics, proteomics, and drug discovery. He is also interested in making artificial intelligence interpretable, so that it can be better integrated into systems where humans are in the decision loop.
With his expertise, Professor François Laviolette plays a significant role in carrying out several multidisciplinary projects at the Big Data Research Center (BDRC) in insurance, health, bioinformatics and life science, and ethics and social acceptability, among other areas. Recently, he has focused on innovation in the aerospace industry by co-leading an international project (DEpendable & Explainable Learning) in collaboration with partners from the academic research community and industry, with significant national and international budgets ($7.5M and $40M, respectively). This project aims to establish the scientific basis for certifiable AI embedded in critical systems.