Schedule and Abstracts

Thursday 27th July

11am-5pm

Coffee 11am-11.30am

11.30am-1pm

11.30 am Steve Gilmour (King's College London)
Optimal two-level designs robust to model uncertainty
(Slides)

12pm R. A. Bailey (University of St Andrews)
Valid restricted randomization for small experiments
(Slides)

12.30pm Hugo Maruri-Aguilar (QMUL)
Design of Computer Experiments

Lunch 1pm-2pm

2pm-3.30pm

2pm Antony Overstall (Southampton)
Gibbs Optimal Design of Experiments
(Slides)

2.30pm Tim Waite (Manchester)
Replication in random translation designs

3pm Olga Egorova (King's College London)
Multi-objective optimal planning of a split-plot experiment: a pharmaceutical application



Tea 3.30pm-4pm

4pm-5pm

4pm Robin Mitra (UCL)
An integrated approach to test for missingness not at random 

4.30pm Dasha Semochkina (University of Southampton)
Optimal designs for nonlinear state-space models with applications in chemical manufacturing
(Slides)


Further discussion and dinner at a local pub from 5.30pm, for those who would like to join.

Please contact ben.parker@brunel.ac.uk with any queries.

Abstracts:
Steve Gilmour (King's College London): Optimal two-level designs robust to model uncertainty
Two-level designs are widely used for screening experiments where the goal is to identify a few active factors which have major effects. In this paper, we apply the model-robust $Q_B$ criterion to the selection of optimal two-level designs without requiring level balance and pairwise orthogonality. We provide a coordinate-exchange algorithm for the construction of $Q_B$-optimal designs for the first-order and second-order maximal models, and demonstrate that different designs will be recommended under different experimenters' prior beliefs. Additionally, we extend the definition of the $Q_B$ criterion to regular and irregular block designs and study the relationship between this new criterion and the aberration-type criteria for blocks. Trade-offs between orthogonality and confounding lead to different choices of block designs, and some new classes of model-robust designs which respect experimenters' prior beliefs are found.
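As a rough illustration of the coordinate-exchange idea mentioned above, the sketch below searches over two-level designs by flipping one coordinate at a time and keeping any improvement. For simplicity it optimises the familiar D-criterion det(X'X) for a main-effects model rather than the $Q_B$ criterion of the talk; the run size, number of factors, and criterion are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative coordinate-exchange search over two-level (+/-1) designs.
# NOTE: this optimises the standard D-criterion for a main-effects model,
# not the Q_B criterion of the talk -- the exchange loop is the same idea.

rng = np.random.default_rng(1)
n_runs, n_factors = 8, 4

def model_matrix(D):
    # intercept column plus main effects
    return np.hstack([np.ones((D.shape[0], 1)), D])

def d_value(D):
    X = model_matrix(D)
    return np.linalg.det(X.T @ X)

D = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))  # random starting design
improved = True
while improved:
    improved = False
    for i in range(n_runs):
        for j in range(n_factors):
            current = d_value(D)
            D[i, j] *= -1          # try flipping one coordinate
            if d_value(D) > current + 1e-9:
                improved = True    # keep the better level
            else:
                D[i, j] *= -1      # revert the flip

print(d_value(D))
```

Because each accepted flip strictly increases the (bounded) criterion, the loop terminates at a local optimum; in practice one would restart from several random designs.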


R. A. Bailey (University of St Andrews): Valid restricted randomization for small experiments

Abstract:  If there is no inherent blocking factor in a small experiment, it may be decided not to use blocks, in order to have more degrees of freedom for the residual and hence more power for detecting treatment differences. However, in that case, complete randomization may produce a long run of plots with the same treatment. How should that be avoided? One common suggestion is simply to discard the undesirable layout and randomize again. This introduces bias, as it makes comparisons between neighbouring plots more likely to contribute to the estimators of treatment differences. When there is a single error term in the analysis of variance, a method of randomization is called strongly valid if the expected mean square for any subset of treatment comparisons is equal to the expected mean square for error if there are no differences between treatments. Here all these mean squares are averaged over all possible outcomes of the randomization. 

One way of achieving strongly valid randomization is to choose a permutation at random from a doubly-transitive permutation group. Applying a random permutation from such a group to a carefully chosen initial layout has the potential to avoid some bad patterns. 

In the context of clinical trials with only two treatments and sequential recruitment of patients, there is a second method using Hadamard matrices. Using this avoids the risk of large treatment imbalance if the trial is terminated early, as well as of long runs of a single treatment.

Yates proposed the term restricted randomization for any valid method that does not include all layouts. Unfortunately, Youden introduced the term constrained randomization for the same thing. His method implicitly uses resolved balanced incomplete-block designs. In my talk I shall describe recent joint work with Josh Paik using this method to produce a catalogue of tables which give a method of valid randomization for small experiments with a single line of experimental units.


Hugo Maruri-Aguilar (QMUL): Design of Computer Experiments

Abstract: Computer simulations are widely used as a substitute for experiments in situations where physical experimentation is costly or even impossible. My talk will describe, through examples, some of the challenges associated with analyzing data and creating models for simulation experiments.

I will discuss designs through three examples: an infectious disease model, a model for a component of an engine and modelling motorway traffic data.


Antony Overstall (Southampton): Gibbs Optimal Design of Experiments

Abstract: Gibbs (or generalised Bayesian) inference is a generalisation of Bayesian inference in which the log-likelihood in Bayes' theorem is replaced by a (negative) loss function. The loss function identifies desirable parameter values for given responses. The advantage of Gibbs inference over traditional Bayesian inference is that it does not require the specification of a probabilistic data-generating process and, therefore, should be less sensitive to misspecification of this process. This talk proposes Gibbs optimal design of experiments for this inferential framework, extending decision-theoretic Bayesian optimal design. The challenge is that the decision-theoretic approach relies on a probabilistic data-generating process, which is notably absent from Gibbs inference. This is circumvented by assuming a designer model: a probabilistic data-generating process which is only used to find a design, not in the ensuing inference. Because of this, the designer model can encapsulate very general data-generating processes, with the aim of introducing robustness into the design procedure. The proposed Gibbs optimal design framework is demonstrated on several illustrative examples.
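The generalised posterior described above can be sketched numerically: a Gibbs posterior is proportional to the prior times exp(-w x loss). The sketch below is a hypothetical illustration only; the squared-error loss, the N(0, 10) prior, and the weight w are assumptions for the example, not choices from the talk.

```python
import numpy as np

# Hypothetical sketch of a Gibbs (generalised Bayesian) posterior on a grid.
# The log-likelihood is replaced by a negative loss; the squared-error loss,
# the N(0, 10) prior, and the weight w below are illustrative assumptions.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=20)      # observed responses

theta = np.linspace(-2.0, 6.0, 801)              # parameter grid
dt = theta[1] - theta[0]
loss = np.array([np.sum((y - t) ** 2) for t in theta])  # squared-error loss
w = 0.5                                          # learning-rate weight
log_prior = -0.5 * theta ** 2 / 10.0             # N(0, 10) prior, up to a constant

log_post = log_prior - w * loss                  # Gibbs posterior: prior x exp(-w * loss)
log_post -= log_post.max()                       # stabilise before exponentiating
post = np.exp(log_post)
post /= post.sum() * dt                          # normalise on the grid

post_mean = np.sum(theta * post) * dt
print(round(post_mean, 3))                       # close to the sample mean of y
```

With this particular loss and weight the Gibbs posterior coincides with an ordinary Gaussian-likelihood posterior, which is why the posterior mean sits near the sample mean; other losses would not have a likelihood counterpart.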



Tim Waite (University of Manchester): Replication in random translation designs

Abstract: In previous work we introduced random translation designs and showed that they have improved performance for model-robust prediction when compared to deterministic designs. We also introduced heuristics to allow quick construction of a good random translation design strategy from a classical V-optimal deterministic design. However, the current framework for random translation designs does not allow any of the design points to be replicated. As a result, when forming a random translation design from a traditional V-optimal design we must first split up any replicates in the V-optimal design. This raises an obvious question: is this splitting necessary, or a good idea? In this talk we show how to extend the framework of random translation designs to allow replication, meaning that splitting is no longer necessary. However, as we also show, it turns out that splitting is often the best choice from the perspective of prediction performance. 



Olga Egorova (King's College London): Multi-objective optimal planning of a split-plot experiment: a pharmaceutical application

Abstract: O. Egorova, K. Mylona, A. Olszewska, B. Forbes. The work presents an example of a pharmaceutical study -- a lab-based split-plot experiment that was planned to (1) explore the response surface with respect to various experimental factors and (2) control the prediction quality in the experimental region. Due to the size of the experiment, the inestimability of possibly present higher-order model terms also encouraged the inclusion of optimality criteria mitigating their potential effects on the inference. We explore a set of optimal designs with respect to various compound optimality criteria, and discuss the specifics of this particular experiment and the main lessons of the study.



Robin Mitra (UCL): An integrated approach to test for missingness not at random
Abstract: Missing data is known to be an inherent and pervasive problem in the process of data collection. The effects are wide-ranging and the loss of data can lead to inefficiencies and introduce bias into analyses. The specific problem of data missing not at random (MNAR) is known to be one of the most complex and challenging problems to handle in this area and testing its prevalence is of great importance. The presence of MNAR missingness can only be tested using a follow-up sample of the missing observations and therefore recovering a proportion of missing values in an efficient way could be crucial in saving the experimenter's costs and time and may result in new treatments/technology reaching the public faster. We develop a strategy to allow researchers to be in a position to be well informed about whether MNAR is a credible issue. Within a multiple regression setting, we demonstrate a proof of concept example and provide recommendations for how the follow-up sample of missing observations should be designed. 



Dasha Semochkina (University of Southampton): Optimal designs for nonlinear state-space models with applications in chemical manufacturing

Abstract: One of the first stages in chemical manufacturing is to create and calibrate a realistic model of the process of interest. Many chemical reactions can be expressed as nonlinear state-space models, which are also widely used in other areas, including statistics, econometrics, information engineering and signal processing. State-space models depend on parameters to be calibrated as well as on control parameters. We address the problem of systematic optimal experimental design for the control parameters for this class of models. We construct locally D-optimal designs by maximising the determinant of the Fisher information matrix. This allows us to identify a set of control parameters such that, if experiments are run under those conditions, the remaining parameters can be estimated with the highest possible precision.
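To illustrate the idea of a locally D-optimal design, the sketch below uses a deliberately simple one-parameter nonlinear model, eta(t; theta) = exp(-theta*t); this toy decay model is an assumption for illustration, not one of the state-space models of the talk. With a single parameter, the Fisher information is the squared sensitivity of the response to the parameter, so the locally D-optimal single observation time simply maximises it.

```python
import numpy as np

# Hypothetical sketch: locally D-optimal observation time for a one-parameter
# decay model eta(t; theta) = exp(-theta * t) (an illustrative assumption,
# not the nonlinear state-space models of the talk).
# With one parameter, the 1x1 Fisher information is the squared sensitivity
# d eta / d theta, so det(FIM) is maximised by maximising that sensitivity.

theta0 = 2.0                       # local prior guess for the parameter
t = np.linspace(0.01, 3.0, 3000)   # candidate observation times
sens = -t * np.exp(-theta0 * t)    # d/d theta of exp(-theta * t) at theta0
fim = sens ** 2                    # 1x1 Fisher information; det(FIM) = fim

t_opt = t[np.argmax(fim)]
print(round(t_opt, 3))             # analytically the optimum is 1/theta0 = 0.5
```

"Locally" here means the design depends on the prior guess theta0; with several parameters one would maximise the determinant of the full Fisher information matrix instead of a scalar sensitivity.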