Workshop Lyon 2013

Workshop title: Why don't our models work?

Organiser / working group: JCUD Data and Models Working Group

Chair(s): Manfred Kleidorfer (manfred.kleidorfer@uibk.ac.at), David McCarthy (david.mccarthy@monash.edu)

The intent of this workshop was to explore the reasons why our urban drainage models often produce poor results (including, but not limited to, our lumped and conceptual water quality models). We invited urban drainage modellers to come to this workshop and present what they considered the single most significant factor behind “Why don’t our models work?”. Examples included: model structure, model implementation (including spatial discretisation), input data issues, calibration data issues, etc. The main difference between this workshop and usual conference presentations was an honest and open discussion about problems in different modelling studies.

The workshop heard from around six groups, with some differences in opinion, thereby creating a good atmosphere for discussion. The envisaged outcome/output was to produce some joint publications which test the hypotheses presented in the workshop; i.e. test the six different opinions using similar datasets and models.

Outcomes

Blackboard activity: To break up the day a bit, we ran a small activity. Each attendee was given a cross and a tick and asked to place them next to a list of the typical factors blamed for why our models do not work: the cross next to the factor they considered the least likely cause, and the tick next to the factor they considered the most likely cause. The results were very interesting - have a look yourself!

Future joint papers: The workshop concluded with a discussion about what we can do together in the form of joint papers. We are hoping to have another workshop in 2014 at the ICUD, where some of us can present the findings from this work. The following ideas were listed as possible joint publications; also listed are the key leaders of each topic, so if you are interested, please contact them. It is noted that all leads were expected to send an email to the attendees (see list above) by the end of the year (at the latest!).

1. Comparative analysis - paper about why our models don’t work (compare models, data, etc.) to determine which sources of uncertainty are the greatest

2. Overarching paper – about modelling aims and expectations

3. Bayesian Model Averaging – a method to combine the predictions of multiple competing models and jointly assess their predictive uncertainty.

4. Maybe our simple models can work better - the true impacts of spatial representation and variability
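To illustrate the Bayesian Model Averaging idea listed above, here is a minimal sketch of how competing model predictions can be combined into one weighted prediction with an associated between-model uncertainty. All numbers (predictions, evidences) are hypothetical placeholders, not results from the workshop, and uniform model priors are assumed.

```python
# Minimal BMA sketch -- all values are hypothetical, not workshop results.
import math

# Hypothetical predictions of the same quantity from three competing models
predictions = [12.0, 15.0, 13.5]

# Hypothetical model evidences (marginal likelihoods); in practice these
# would come from calibrating each model against the same observation set
evidences = [0.2, 0.5, 0.3]

# Posterior model weights: normalised evidences (uniform priors assumed)
total = sum(evidences)
weights = [e / total for e in evidences]

# BMA point prediction: evidence-weighted average of the model predictions
bma_mean = sum(w * p for w, p in zip(weights, predictions))

# Between-model variance: spread of the individual predictions about the
# BMA mean, one component of the joint predictive uncertainty
between_var = sum(w * (p - bma_mean) ** 2 for w, p in zip(weights, predictions))

print(f"BMA mean: {bma_mean:.2f}, between-model std: {math.sqrt(between_var):.2f}")
```

The between-model variance captures disagreement among the models themselves; a full BMA treatment would also add each model's own within-model predictive variance.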