Data sharing is often limited by privacy concerns. This is especially common for health datasets, given the inherent sensitivity of the data. When the original dataset cannot be shared, one alternative is to generate a synthetic dataset that preserves as much of the original's statistical information as possible while containing records for fictitious individuals, thereby protecting the confidentiality of respondents. One way to ensure that such synthetic data effectively protect respondents is to use differential privacy, a rigorous measure of disclosure risk.
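As a concrete illustration of differential privacy, the sketch below implements the classic Laplace mechanism for a counting query (this is a standard textbook construction, not the specific mechanism used in this project): a counting query has sensitivity 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy. The dataset and query are made up for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Draw from Laplace(0, scale) via inverse-CDF sampling.
    u = 0.0
    while u == 0.0:          # exclude 0 so u - 0.5 never hits -0.5 (log(0))
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    # Adding or removing one respondent changes a count by at most 1
    # (sensitivity 1), so Laplace noise of scale 1/epsilon suffices
    # for epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: noisy count of respondents aged 65 or over.
ages = [34, 61, 45, 70, 52, 29, 66]
noisy_count = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
```

Each call returns the true count (here 2) perturbed by noise whose scale grows as ε shrinks, making the trade-off between privacy and accuracy explicit.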
This project investigates how to analyze these synthetic datasets to obtain valid statistical results, since traditional inference methods must be modified to account for the variability added by the generation of the synthetic dataset.
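One well-known way the extra variability is handled — shown here as an illustrative sketch, not necessarily the method this project develops — is to release several synthetic datasets and pool the analyses with combining rules such as Reiter's rules for partially synthetic data, where the between-dataset variance inflates the usual variance estimate:

```python
import statistics

def combine_synthetic_estimates(estimates, variances):
    """Pool analyses run separately on m partially synthetic datasets.

    `estimates` are the m point estimates of the same quantity and
    `variances` their within-dataset variance estimates. Following
    Reiter's combining rules for partially synthetic data, the total
    variance adds b_m / m (between-dataset variance, scaled) to the
    average within-dataset variance, accounting for the noise that
    synthesis itself introduces.
    """
    m = len(estimates)
    q_bar = statistics.mean(estimates)      # pooled point estimate
    u_bar = statistics.mean(variances)      # average within variance
    b_m = statistics.variance(estimates)    # between-dataset variance
    total_var = u_bar + b_m / m
    return q_bar, total_var

# Hypothetical example: a mean estimated on m = 5 synthetic datasets.
q_hat, t_var = combine_synthetic_estimates(
    estimates=[10.2, 9.8, 10.5, 10.0, 9.9],
    variances=[0.4, 0.4, 0.4, 0.4, 0.4],
)
```

Ignoring the b_m / m term and using a single synthetic dataset as if it were the original would understate the variance and produce invalid confidence intervals — precisely the problem the project description refers to.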
RHHDS students, who come mainly from the natural sciences and engineering, are also trained in the ethical, legal and social implications of handling and analyzing sensitive data.