Research supervisor(s)
Josée Desharnais
Pascal Germain
Start date
Title of the research project
Sparse decision trees based on logic for increased interpretability
Description

Interpretability of artificial intelligence, that is, the capacity of an expert to understand why a prediction is made, is of great importance in health analysis. Firstly, it matters to understand why a decision is made by an algorithm when that decision has such an impact on a person's life. Moreover, in research, interpretable algorithms are useful because they often unveil new investigation paths.

This study aims to combine two supervised machine learning algorithms, using tools from mathematical logic, to optimize both interpretability and performance. The resulting algorithm is intended to yield better predictions by slightly increasing model complexity while preserving high interpretability.
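The combined algorithm itself is not specified in this description, so the following is only a minimal sketch of one of its ingredients: a sparsity-constrained decision tree whose branches read directly as logical rules. The use of scikit-learn, the synthetic dataset, and all parameter values are illustrative assumptions, not the project's actual method.

```python
# A sparsity-constrained decision tree: capping depth and leaf count keeps
# the model small enough to inspect rule by rule. (Illustrative sketch only.)
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=5, random_state=0)
tree.fit(X, y)

# Each root-to-leaf path is a conjunction of threshold tests, i.e., a logical
# rule such as "x_3 <= 0.2 AND x_7 > 1.1 => class 1".
print(export_text(tree, feature_names=[f"x_{i}" for i in range(20)]))
```

Printing the tree as text makes the interpretability claim concrete: the whole model is a handful of human-readable if-then rules.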

This algorithm is developed to analyze fat data, that is, data with many characteristics (features) but few samples (observations). This type of data is common in health applications, notably in genomics, metagenomics and metabolomics, which are all at the forefront of medical analysis. More precisely, we are interested in problems such as antibiotic resistance or long coronavirus disease (long COVID-19).
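To make the fat-data setting concrete, here is a hedged illustration on synthetic data (the sample and feature counts are arbitrary assumptions): the feature matrix is far wider than it is tall, yet a sparse tree only ever consults a handful of features, which is what keeps its rules readable.

```python
# "Fat" data sketch: far more features than samples, as in genomics or
# metabolomics. Values are synthetic; counts are arbitrary assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5_000))   # e.g., 60 patients, 5,000 markers
y = rng.integers(0, 2, size=60)    # e.g., antibiotic-resistant or not

# A sparse tree touches only a few of the 5,000 features, so its rules
# remain inspectable despite the dimensionality.
tree = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0).fit(X, y)
used = np.flatnonzero(tree.feature_importances_)
print(f"features used: {len(used)} of {X.shape[1]}")
```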
 
