Research supervisor(s)
Josée Desharnais
Pascal Germain
Start date
Title of the research project
Sparse Decision Trees based on logic for increased interpretability
Description

Interpretability in artificial intelligence, that is, the ability of an expert to understand why a prediction is made, is of great importance in health analysis. First, it matters to understand why an algorithm makes a decision when that decision has a major impact on a person's life. Moreover, in research, interpretable algorithms are useful because they often unveil new avenues of investigation.

This project aims to combine two supervised machine learning algorithms to optimize both interpretability and performance, for instance using tools from mathematical logic. The new algorithm is intended to yield better predictions by slightly increasing model complexity while preserving high interpretability.

This algorithm is designed to analyze fat data, that is, data with many characteristics (features) but few samples (observations). This type of data is common in health applications, notably genomics, metagenomics, and metabolomics, which are all at the forefront of medical analysis. More precisely, we are interested in problems such as antibiotic resistance and long COVID (post-COVID-19 condition).
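To make the fat-data setting concrete, here is a minimal sketch of a sparsity-constrained tree trained on synthetic data. It uses scikit-learn's DecisionTreeClassifier with depth and leaf-count limits purely as an illustrative stand-in: it is not the logic-based algorithm under development, and the dataset, feature counts, and constraint values are assumptions chosen for illustration.

```python
# Illustrative sketch only: a standard scikit-learn decision tree with
# sparsity constraints, standing in for the logic-based sparse trees the
# project aims to develop. All numbers below are hypothetical.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Fat data: many features (as in genomics or metabolomics), few observations.
X, y = make_classification(
    n_samples=60,        # few samples, e.g., patients
    n_features=5000,     # many features, e.g., genomic markers
    n_informative=10,    # only a handful are actually predictive
    random_state=0,
)

# Sparsity constraints: a shallow tree with few leaves stays interpretable,
# since each prediction is explained by a short conjunction of feature tests.
tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=5, random_state=0)
tree.fit(X, y)

# Print the tree as nested if/else rules over a small set of features.
print(export_text(tree))
```

Each root-to-leaf path in such a tree reads as a short logical rule over a handful of features, which is the kind of interpretability the project seeks to preserve while improving predictive performance.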
 
