Research supervisor(s)
Josée Desharnais
Pascal Germain
Start date
Title of the research project
Sparse Decision Trees Based on Logic for Increased Interpretability
Description

Interpretability in artificial intelligence, that is, the capacity of an expert to understand why a prediction is made, is of great importance in health analysis. First, when an algorithm's decision has such an impact on a person's life, it matters to understand why that decision was made. Moreover, interpretable algorithms are useful in research because they often unveil new investigation paths.

This study aims to combine two supervised machine learning algorithms, using tools from mathematical logic, to optimize both interpretability and performance. The new algorithm is intended to improve predictions by slightly increasing model complexity while preserving high interpretability.

This algorithm is developed to analyze "fat" data, that is, data with many characteristics (features) but few samples (observations). This type of data is common in health applications, notably in genomics, metagenomics, and metabolomics, all of which are at the forefront of medical analysis. More precisely, we are interested in problems such as antibiotic resistance and long COVID (post-COVID-19 condition).
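As a toy illustration of the "fat data" setting described above (a hypothetical sketch, not the project's actual algorithm), the following trains a one-split decision stump, the simplest possible sparse decision tree, on synthetic data with many features and few samples. The learned model is a single human-readable rule, which is what makes such sparse trees interpretable:

```python
# Hypothetical illustration only: a one-split "decision stump" learned on
# fat data (many features, few samples). All names and data are made up.
import random

random.seed(0)

n_samples, n_features = 10, 50
y = [0] * 5 + [1] * 5  # binary labels

# Mostly noise features; feature 7 carries the signal by construction.
X = [[random.random() for _ in range(n_features)] for _ in range(n_samples)]
for i, label in enumerate(y):
    X[i][7] = label + random.random() * 0.2  # classes separable on feature 7

def fit_stump(X, y):
    """Pick the single (feature, threshold) rule with best training accuracy."""
    best = (0.0, 0.0, 0)  # (accuracy, threshold, feature index)
    for j in range(len(X[0])):
        for row in X:
            t = row[j]  # candidate threshold: each observed value
            acc = sum((x[j] > t) == bool(lbl) for x, lbl in zip(X, y)) / len(y)
            if acc > best[0]:
                best = (acc, t, j)
    return best

acc, threshold, feature = fit_stump(X, y)
# The whole model is one readable rule, e.g. "predict 1 if x[7] > 0.18".
print(f"rule: x[{feature}] > {threshold:.2f}  (train accuracy {acc:.2f})")
```

With only 10 samples and 50 features, some noise features can separate the classes by chance, which hints at why sparsity and additional structure (such as the logic-based tools mentioned above) matter in this regime.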

Featured project

The project consists of determining and exploring the possibilities offered by dynamic dashboards in a medical context, as well as the associated data-management structures. It therefore addresses several aspects of data management, including considerations related to DICOM data transfers and different approaches to managing and preserving these data. In addition, the dashboards will be designed to present information effectively, clearly, and concisely using recognized visualization tools.