Research supervisor(s)
Josée Desharnais
Pascal Germain
Start date
Title of the research project
Sparse decision trees based on logic for increased interpretability
Description

Interpretability in artificial intelligence, that is, the capacity of an expert to understand why a prediction is made, is of great importance in health analysis. First, it matters to understand why a decision is made by an algorithm when that decision has such an impact on a person's life. Moreover, in research, interpretable algorithms are useful because they often unveil new avenues of investigation.

This study aims to combine two supervised machine learning algorithms, using tools from mathematical logic, to optimize both interpretability and performance. The new algorithm aims to improve predictions by slightly increasing model complexity while preserving high interpretability.
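As a point of reference for what "sparse" and "interpretable" mean here, the sketch below trains a single shallow, pruned decision tree with scikit-learn and prints its decision rules as readable logic. The dataset, depth limit, and pruning parameter are illustrative assumptions; the project's actual algorithm, which combines two learners, is not reproduced here.

# Minimal sketch: a sparse (shallow, pruned) decision tree whose rules
# can be read directly. Assumes scikit-learn; parameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Sparsity here comes from a hard depth limit plus cost-complexity
# pruning, which keep the tree small enough for a human to inspect.
tree = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# Each root-to-leaf path is a human-readable logical rule.
print(export_text(tree, feature_names=list(data.feature_names)))

Each printed rule is a conjunction of threshold tests, which is what makes this family of models amenable to analysis with logical tools.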

This algorithm is designed to analyze fat data, that is, data with many characteristics (features) but few samples (observations). This type of data is common in health applications, notably in genomics, metagenomics, and metabolomics, all of which are at the forefront of medical analysis. More precisely, we are interested in problems such as antibiotic resistance or long COVID (post-COVID-19 condition).
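To make the fat-data regime concrete, the sketch below builds a synthetic dataset with far more features than observations, loosely mimicking the shape of genomic data, and cross-validates a shallow tree on it. All sizes and the synthetic dataset itself are illustrative assumptions, not the project's data.

# Minimal sketch of the fat-data regime: many features, few observations.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 60 samples, 5000 features, only a handful of which are informative.
X, y = make_classification(n_samples=60, n_features=5000,
                           n_informative=5, n_redundant=0, random_state=0)
print(X.shape)  # (60, 5000): far more features than observations

# A shallow tree keeps the learned rules short; cross-validation guards
# against the overfitting that fat data makes so easy.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())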
 
