### Provisional Schedule

- The short course will run from 10.15 until 17.45 on Sunday 15 July.
- The Draft Programme is now available.
- The poster session is on Tuesday evening, but we plan to leave posters on display afterwards.

### Short course: Sunday 15 July (9.30-17.00)

The conference will open with a short course, *Introduction to Causal Modelling and Inference*, given by Vanessa Didelez, introducing the central concepts and models used for inference on causal effects from observational, or imperfectly randomised, data. The course will be suitable for researchers with a solid background in statistics and probability; no prior exposure to causal inference or graphical models is required. The course aims to provide a basis from which participants are able to read a variety of topical research papers in the area of causal inference. Throughout, small exercises will be discussed; these will be pen-and-paper based, so participants do not need to bring a laptop. Software packages for the various methods will be pointed out. There is a modest extra charge (including refreshments) for short course participation. The course outline is as follows.

- Basic concepts of causal inference such as the formalisation of causality in terms of effects of interventions using Pearl's do-operator and potential outcomes; measures of causal effect; causal DAGs; confounding; identifiability (e.g. back-door criterion)
- Basic methods of adjustment and modelling: marginal structural models (inverse probability weighting, double robustness); propensity scores / matching; sequential treatments and models for longitudinal data (g-computation and g-estimation).
- Instrumental variable assumptions and methods, which are useful in the presence of unobserved confounding. Keywords: instrumental variable inequalities; two-stage least squares.
- Time permitting, we will finally briefly address the basic ideas and algorithms of causal discovery, especially those used in the context of high-dimensional data. Central concepts: causal Markov condition; causal sufficiency and faithfulness; constraint based algorithms (PC- and FCI) versus score-based algorithms.
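As a taste of the adjustment methods in the second bullet, inverse probability weighting can be illustrated on simulated data in which the true propensity score is known (in practice it must be estimated, e.g. by logistic regression); a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Confounder X influences both treatment assignment and outcome.
x = rng.normal(size=n)
propensity = 1 / (1 + np.exp(-x))       # P(T=1 | X); known here, estimated in practice
t = rng.binomial(1, propensity)
y = 2.0 * t + x + rng.normal(size=n)    # true causal effect of T on Y is 2.0

# Naive contrast is confounded by X; the IPW estimator removes the bias.
naive = y[t == 1].mean() - y[t == 0].mean()
ipw = np.mean(t * y / propensity) - np.mean((1 - t) * y / (1 - propensity))
```

Here `ipw` recovers the true effect of 2.0 (up to sampling noise), while `naive` is biased upwards because treated subjects have larger X on average.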

### Keynote Speakers

*Covariance functions, the equivalent of covariance matrices for functional data, might not be the first data object that comes to mind. However, in a number of applications, from neuroimaging to linguistics, they are the underlying statistical object of interest. Their analysis is not completely straightforward, though, as they lie in non-Euclidean spaces, which makes even simple statistical analysis somewhat more complex. In this talk, we will demonstrate how we can use different approaches from statistics and applied mathematics to carry out statistical analysis of covariance functions. We will look at covariances which change over space, and also covariances which only result from observations of an inverse problem. The talk will be illustrated with examples from the study of dialects and languages and from the investigation of brain connectivity. [Joint work with John Coleman, Eardi Lila, Davide Pigoli and Shahin Tavakoli]*
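As a minimal illustration of the data object discussed in this talk, the pointwise sample covariance function of curves observed on a common grid can be computed as below (simulated curves; this is only the Euclidean starting point, not the non-Euclidean analysis the talk develops):

```python
import numpy as np

rng = np.random.default_rng(1)
n_curves, n_grid = 100, 50
grid = np.linspace(0, 1, n_grid)

# Simulate smooth random curves: random combinations of two Fourier modes.
a = rng.normal(size=(n_curves, 1))
b = rng.normal(size=(n_curves, 1))
curves = a * np.sin(2 * np.pi * grid) + b * np.cos(2 * np.pi * grid)

# Sample covariance function on the grid: cov_fn[s, t] estimates Cov(X(s), X(t)).
centered = curves - curves.mean(axis=0)
cov_fn = centered.T @ centered / (n_curves - 1)   # (n_grid, n_grid) symmetric PSD matrix
```

The resulting matrix is symmetric and positive semi-definite; it is this constraint (covariances live on a cone, not in a vector space) that makes their statistical analysis non-Euclidean.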

*Data that can be arranged in an array are common in statistics and then models often have a row-column-depth-... structure. In such cases, data sets and models can be large, and can present computational problems even for modern computers, particularly when smoothing is carried out in a Generalized Linear Model (GLM) framework. A Generalized Linear Array Model or GLAM is a fast low-footprint method of computation for such data sets and models. The GLAM approach and algorithms were first described at the IWSM in Florence in 2004. We describe GLAM and then mention some of its many applications since that time. The IWSM conferences have played a key role in GLAM's development. We tell the story of GLAM through these workshops, starting with the 1999 conference in Graz and finishing in Bristol in 2018.*
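A computational core of the array approach is evaluating products with Kronecker-structured basis matrices without ever forming the Kronecker product. A minimal numpy illustration of the underlying identity (A ⊗ B) vec(Θ) = vec(B Θ Aᵀ), which the full GLAM algorithms extend to weighted cross-products as well:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 3))      # marginal basis for the first array dimension
B = rng.normal(size=(5, 2))      # marginal basis for the second array dimension
Theta = rng.normal(size=(2, 3))  # coefficients stored as an array, not a vector

# Naive evaluation: form the full Kronecker product (memory blows up for real bases).
full = np.kron(A, B) @ Theta.reshape(-1, order="F")

# Array evaluation: (A kron B) vec(Theta) = vec(B @ Theta @ A.T), no Kronecker product.
fast = (B @ Theta @ A.T).reshape(-1, order="F")

assert np.allclose(full, fast)
```

For d-dimensional arrays the same trick applies one margin at a time, so storage and flops scale with the marginal bases rather than with their Kronecker product.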

*Analysis of credibility is a reverse-Bayes technique that has been proposed by Matthews (2001a,b, 2018) to overcome some of the shortcomings of significance tests. A significant result is deemed credible if current knowledge about the effect size is in conflict with a sceptical prior derived from the data that makes the effect non-significant. In this talk I describe this approach and propose to use Bayesian predictive tail probabilities to quantify the evidence for credibility. This gives rise to a p-value for credibility, p_C, taking into account both the internal and the external evidence for an effect. The assessment of intrinsic credibility is based on the internal data only and leads to a new threshold for ordinary significance that is remarkably close to the recently proposed 0.005 level. Finally, a p-value for intrinsic credibility, p_IC, is proposed that is a simple function of the ordinary p-value for significance and has a direct frequentist interpretation in terms of the replication probability that a future study under identical conditions will give an estimated effect in the same direction as the first study.*
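Reading the abstract at face value, and assuming p_IC is the two-sided tail probability of z/√2 (where z is the z-value corresponding to the ordinary two-sided p-value, a formulation matching Held's published account), the new significance threshold can be recovered numerically; a sketch:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def z_from_p(p, tol=1e-12):
    """Invert the two-sided p-value p = 2 * (1 - Phi(z)) by bisection."""
    lo, hi = 0.0, 40.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 2 * (1 - Phi(mid)) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def p_intrinsic(p):
    """Assumed p-value for intrinsic credibility: two-sided tail at z / sqrt(2)."""
    z = z_from_p(p)
    return 2 * (1 - Phi(z / sqrt(2)))
```

Under this assumption, requiring p_IC < 0.05 corresponds to an ordinary p-value threshold near 0.0056, which is the sense in which the new threshold is "remarkably close" to 0.005; the replication probability (same direction in an identical future study) is then 1 - p_IC / 2.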

*To model recurrent interaction events in continuous time, an extension of the stochastic block model is proposed where every individual belongs to a latent group and interactions between two individuals follow a conditional inhomogeneous Poisson process with intensity driven by the individuals' latent groups. The model is shown to be identifiable and its estimation is based on a semiparametric variational expectation-maximization algorithm. Two versions of the method are developed, using either a nonparametric histogram approach (with an adaptive choice of the partition size) or kernel intensity estimators. The number of latent groups can be selected by an integrated classification likelihood criterion. Finally, we demonstrate the performance of our procedure on synthetic experiments, analyse two datasets to illustrate the utility of our approach and comment on competing methods.*
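The generative model described here can be sketched with constant intensities per group pair (the model itself allows inhomogeneous, i.e. time-varying, intensities); a minimal simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 20, 10.0
groups = rng.integers(0, 2, size=n)   # latent group of each individual

# Intensity of interaction for each ordered pair of groups (symmetric here).
lam = np.array([[1.0, 0.2],
                [0.2, 0.5]])

events = []   # (time, i, j) recurrent interaction events on [0, T]
for i in range(n):
    for j in range(i + 1, n):
        rate = lam[groups[i], groups[j]]
        k = rng.poisson(rate * T)                 # homogeneous Poisson count on [0, T]
        times = np.sort(rng.uniform(0, T, size=k))  # event times, uniform given the count
        events += [(t, i, j) for t in times]
events.sort()
```

Estimation then runs in the other direction: given only `events`, the variational EM alternates between recovering the latent `groups` and estimating the intensities nonparametrically.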

*There has been a remarkable surge of activity lately on the topic of non-parametric estimation of transition probabilities in non-Markov multi-state models. This work summarizes the most flexible of recent approaches, the landmark Aalen-Johansen estimator, and discusses ongoing work on testing the Markov assumption, inspired by the landmark Aalen-Johansen estimator. The landmark Aalen-Johansen estimator is based on a subsample of the complete data, containing all subjects present in a specific state at a specific time point. Because it is based on reduced data, it is less efficient than the Aalen-Johansen estimator, which might still perform reasonably well in situations where the multi-state model shows only mild deviations from Markovianity. In the final part we will explore ways of combining the efficiency of the Aalen-Johansen approach with the robustness of the landmark Aalen-Johansen approach.*
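The subsampling step that defines the landmark estimator can be sketched as follows, for a hypothetical data layout in which each subject's path is a sorted list of (entry time, state) pairs; the ordinary Aalen-Johansen estimator is then applied to the selected subsample:

```python
def landmark_subsample(paths, state, t):
    """Return the subjects occupying `state` at landmark time `t`.

    paths: {subject: [(entry_time, state), ...]} with entries sorted by time.
    Subjects not yet under observation at time t are excluded.
    """
    keep = []
    for subj, path in paths.items():
        current = None
        for entry, st in path:
            if entry <= t:
                current = st   # last state entered at or before t
        if current == state:
            keep.append(subj)
    return keep

# Hypothetical toy data: subject 1 moves from "healthy" to "ill" at time 2.0.
example = {1: [(0.0, "healthy"), (2.0, "ill")],
           2: [(0.0, "healthy")],
           3: [(0.0, "ill")]}
at_risk = landmark_subsample(example, "healthy", 1.0)   # subjects 1 and 2
```

Conditioning on the occupied state at the landmark time is what removes the Markov assumption, at the price of the efficiency loss discussed above.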

- Sunday 15 July: informal get together in a pub.
- Monday 16 July: welcome reception.
- Wednesday 18 July: afternoon excursion. Choice of:
  - Boat trip to SS Great Britain (with evening meal).
  - Tour of Moor Beer Brewery (includes beer).
  - Guided walking tour of historic Bath.

- Thursday 19 July: Conference dinner in the Bristol Museum (included in registration fee).