Research supervisor(s)
Josée Desharnais
Pascal Germain
Start date
Title of the research project
Sparse Decision Trees Based on Logic for Increased Interpretability
Description

Interpretability in artificial intelligence, that is, the ability of an expert to understand why a prediction was made, is of great importance in health analysis. First, it matters to understand why an algorithm makes a decision when that decision has a major impact on a person's life. Moreover, in research, interpretable algorithms are useful because they often unveil new avenues of investigation.

This study aims to combine two supervised machine learning algorithms, using tools from mathematical logic, to optimize both interpretability and performance. The resulting algorithm is intended to improve predictions by slightly increasing model complexity while preserving high interpretability.

This algorithm is developed to analyze "fat" data, that is, data with many characteristics (features) but few samples (observations). This type of data is common in health applications, notably genomics, metagenomics, and metabolomics, which are all at the forefront of medical analysis. More precisely, we are interested in problems such as antibiotic resistance and long COVID.
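To make the "fat data" setting concrete, the sketch below fits a shallow decision tree to synthetic data with far more features than samples. This is an illustration only, not the project's actual method: the dataset, the depth limit, and the use of scikit-learn's `DecisionTreeClassifier` are all assumptions chosen to show why a small tree remains human-readable in this regime.

```python
# Illustrative sketch (assumptions, not the project's algorithm):
# a shallow decision tree on synthetic "fat" data, where the number of
# features greatly exceeds the number of observations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_samples, n_features = 40, 1000  # few observations, many features
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 3] > 0).astype(int)  # label driven by a single hidden feature

# Limiting depth and leaf size keeps the model small and interpretable:
# the entire decision logic can be printed as a handful of if/else rules.
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=5, random_state=0)
tree.fit(X, y)
print(export_text(tree))
```

Printing the fitted tree with `export_text` shows the whole model as a few nested threshold rules, which is the kind of transparency the project seeks to preserve while improving predictive performance.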
 
