Invited speakers

Geert Verbeke

How to correct for baseline covariates in longitudinal clinical trials?

In clinical trials, mixed models are becoming more popular for the analysis of longitudinal data. The main motivation is often expected dropout, which is easily handled through the analysis of the longitudinal trajectories. In many situations, analyses are corrected for baseline covariates such as study site or stratification variables. Key questions are then how to perform a longitudinal analysis that corrects for baseline covariates, and how sensitive the results are to the choices made and the models used. In this presentation, we will first present and compare a number of techniques available to correct for baseline covariates within the context of the linear mixed model for continuous outcomes. Second, we will study the sensitivity of the various techniques when the baseline correction is based on a wrong model or does not include important covariates. Finally, our findings will be used to formulate some general guidelines relevant in a clinical trial context. All findings and results will be illustrated extensively using data from a real clinical trial.
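As a rough illustration of the kind of model involved (not the speaker's specific analysis), the sketch below fits a linear mixed model to long-format trial data in Python, with a random intercept and slope per subject and fixed effects adjusted for baseline covariates. All file and column names (subject, time, outcome, baseline, site, treatment) are hypothetical placeholders.

```python
# Minimal sketch, assuming a long-format data set with one row per visit.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_long.csv")  # hypothetical long-format trial data

# Fixed effects: treatment-by-time interaction plus baseline covariates;
# random intercept and slope per subject to handle the repeated measures.
model = smf.mixedlm(
    "outcome ~ treatment * time + baseline + C(site)",
    data=df,
    groups="subject",
    re_formula="~time",
)
fit = model.fit(reml=True)
print(fit.summary())
```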


Christel Faes

Modeling the COVID-19 epidemic in Belgium to inform policy makers

Belgium has been hit particularly hard by the coronavirus, placing the country near the top of international rankings of confirmed cases per 100,000 inhabitants and deaths per million. Belgium accounted for more than half a million confirmed cases and over 17,000 SARS-CoV-2 confirmed and suspected deaths in 2020. Belgium’s location at the centre of Europe, high international mobility, high population density, high average household size and an older population structure, combined with relatively high mixing behaviour, increase the transmission potential. Short-term predictions were used to help local and national governments in decision-making on interventions during the outbreak and in preserving hospital capacity. Information on local mobility, absenteeism, testing strategy and GP consultations is used in the prediction model, based on distributed lag non-linear models. Spatio-temporal trends are tracked to raise alarms when the growth rates in hospitalizations and cases change. Mathematical modelling was used to inform policy makers on the possible impact of restriction measures. Some highlights of these modelling exercises will be presented.
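To convey the distributed-lag idea behind such predictions, here is an illustrative simplification: a plain distributed-lag Poisson regression of daily admissions on lagged mobility, not the full distributed lag non-linear model used by the team. The file name, variable names and lag window are hypothetical.

```python
# Sketch only: regress daily hospital admissions on lagged mobility and
# absenteeism with a Poisson GLM, using hand-built lag columns.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("daily_indicators.csv")  # hypothetical: date, admissions, mobility, absenteeism

max_lag = 14  # assume mobility can affect admissions up to two weeks later
for lag in range(1, max_lag + 1):
    df[f"mobility_lag{lag}"] = df["mobility"].shift(lag)
df = df.dropna()

lag_cols = [f"mobility_lag{lag}" for lag in range(1, max_lag + 1)]
X = sm.add_constant(df[lag_cols + ["absenteeism"]])
fit = sm.GLM(df["admissions"], X, family=sm.families.Poisson()).fit()
print(fit.summary())
```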


Kerrie Mengersen

Sloppy models: unveiling parameter uncertainty in mathematical models

In this presentation, I will discuss a Bayesian approach to assessing the sensitivity of model outputs to changes in parameter values in mathematical models, constrained by the combination of prior beliefs and data. The approach identifies stiff parameter combinations that strongly affect the quality of the model-data fit, while simultaneously revealing which of these key parameter combinations are informed primarily by the data and which are also substantively influenced by the priors. These stiff parameter combinations can uncover controlling mechanisms underlying the system being modeled and guide future experiments for improved parameter inference.
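One common way to expose such directions, sketched below as an illustration of the general idea rather than the exact procedure of the paper, is to eigendecompose the posterior covariance: eigenvectors with small eigenvalues are tightly constrained ("stiff") parameter combinations, those with large eigenvalues are "sloppy" ones. The array of posterior samples is a hypothetical placeholder.

```python
# Illustrative sketch, assuming posterior samples are already available.
import numpy as np

posterior_samples = np.load("posterior_samples.npy")  # shape (n_draws, n_params), hypothetical

cov = np.cov(posterior_samples, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

print("Stiffest combination (smallest posterior variance):")
print(eigvecs[:, 0], "variance:", eigvals[0])
print("Sloppiest combination (largest posterior variance):")
print(eigvecs[:, -1], "variance:", eigvals[-1])

# Comparing these posterior variances with the prior variances along the same
# directions indicates whether a stiff combination is informed mainly by the
# data or is still strongly shaped by the prior.
```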

The focus of the discussion will be on the very common context in complex systems where the amount and quality of data are low compared to the number of model parameters to be collectively estimated. The approach will be illustrated with applications in biochemistry, ecology, and cardiac electrophysiology.

This is joint work with Gloria Monsalve-Bravo (lead author), Brodie Lawson, Christopher Drovandi, Kevin Burrage, Kevin Brown, Christopher Baker, Sarah Vollert, Eve McDonald-Madden and Matthew Adams. The full paper is available as arXiv preprint arXiv:2203.15184.


Pere Puig

I've been irradiated!! What is the total amount of radiation I've received?

In the event of a radiation accident, biological dosimetry is critical for determining, in a timely way, the radiation dose received by an exposed individual. The dose is estimated from the amount of damage caused by radiation at the cellular level, for instance by counting chromosome aberrations such as dicentrics, micronuclei, or translocations. The theory of count data distributions is critical for achieving this goal. In this talk, we will introduce the standard statistical methodology for dose estimation described in the International Atomic Energy Agency's manual (IAEA, 2011), and summarise recent research led by our team. We will present models based on compound Poisson processes that are suitable for describing high-LET radiation exposures such as those seen in the Fukushima accident, zero-inflated and mixed Poisson models for partial and heterogeneous exposures, and weighted Poisson models for integrating low and high doses.
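As a hedged illustration of the standard workflow (a linear-quadratic dose-response curve for dicentric yields fitted by Poisson regression and then inverted for a new sample), consider the Python sketch below. The calibration numbers are invented placeholders for illustration, not real data, and the details differ from the full IAEA procedure.

```python
# Sketch, assuming dicentric yield per cell follows Y(D) = c0 + c1*D + c2*D^2.
import numpy as np
import statsmodels.api as sm

# Hypothetical calibration data: dose points (Gy), cells scored, dicentrics seen.
dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([5000, 5000, 2000, 1000, 500, 500])
dicentrics = np.array([5, 140, 150, 240, 250, 420])

# Poisson regression with identity link; multiplying each regressor by the
# number of cells makes the fitted mean equal cells * (c0 + c1*D + c2*D^2).
X = np.column_stack([cells, cells * dose, cells * dose**2])
fit = sm.GLM(dicentrics, X,
             family=sm.families.Poisson(sm.families.links.Identity())).fit()
c0, c1, c2 = fit.params  # yield per cell

# Dose estimate for a new sample: positive root of c2*D^2 + c1*D + (c0 - y) = 0,
# where y is the observed yield per cell.
y_obs = 25 / 500  # hypothetical: 25 dicentrics scored in 500 cells
D_hat = (-c1 + np.sqrt(c1**2 - 4 * c2 * (c0 - y_obs))) / (2 * c2)
print(f"Estimated absorbed dose: {D_hat:.2f} Gy")
```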

