26 November 2013

Discussion of Waldherr & Wijermans' JASSS forum paper (based on a SIMSOC mailing list survey) about the criticisms social simulation receives in its 'target' areas and how one might overcome them.

I (Stuart) have linked to this summary in a SIMSOC posting since the authors asked for feedback on the paper.

Participants:

Stuart Rossiter

Jason Noble

Eric Silverman

Joe Viana

Elisabeth zu Erbach-Schoenberg

Jason Hilton

Sandra Mueller

Lewys Brace

Babak Ardestani

Summary of Points

[Please bear in mind that this was only an hour's informal discussion.]

  1. The paper seemed to implicitly treat the categories of criticism put forward as 'invalid' criticisms, yet most (all?) of them have sub-flavours which are potentially valid; e.g., the charge that a model is too complex is fair when a simpler one exists which fits the data as well as yours, and you haven't justified what your model brings in addition (such as a more plausible causal mechanism).
  2. Conversely, are there other categories of criticism which do tend to be valid? Following on from the above, one general category is "Your model adds nothing to the field" (i.e., no better empirical accuracy, no richer theory and no 'better' methodology), though obviously there is still a lot of subjectivity there.
  3. We assumed the context is social simulation in general (not just ABM), but is there anything specific to simulation as compared with other kinds of modelling? We couldn't think of anything off the top of our heads.
  4. As well as "selling social simulations to sceptical minds", there is the obverse problem of selling them to "gullible" :-) minds; i.e., there are real issues with audiences failing to scrutinise model mechanisms, being swayed by nice visualisations, etc.
  5. The set of criticisms (and the ways to frame responses, or to re-frame the model in the first place) forms a nice 'checklist' of potential criticisms for modellers to consider whilst writing up their research.
  6. In terms of strategies, then, as well as what to write/say, there's the question of how to arrive at it. The obvious tactic is to do the work in collaboration with someone from the target field who can anticipate responses and frame things accordingly (or to be that person yourself: someone originally trained in the target field who has somewhat 'rejected' it, à la Scott Moss or Paul Ormerod). If you attempt to do this yourself (for a publication in the target field), you can make your argument worse by giving naïve interpretations of the target field's principles.
  7. The suggested responses are quite 'passive', in the sense that another option is to 'fight back' and point out deficiencies in the target field's theory/methodology. (Having said that, this can be viewed as just a particular flavour of the suggested meta-communication/relate strategies.)
  8. There is a wider question of when it is worth trying to publish in the target field's literature (touched on in the 'don't enter the cycle' bit). Disciplines and sub-disciplines arise precisely because of core theoretical/methodological divergences, and tend to attract people who naturally(?) think in those particular ways (thus being a priori hard to convince). Arguably, it is often better to publish in the social sim literature and use contacts and networking to try to convince 'others' if so desired (where there is often more time and scope---and goodwill---for the discussion than in a space-restricted, 'competitive' publication context with the vagaries of peer review).
  9. Stating the aims of the research up front is crucial, and very audience-dependent: if you don't specify them clearly, you'll probably fail (even if you follow the paper's ideas in the rest of the write-up); e.g., if the audience thinks you're trying to achieve predictive accuracy when you're actually aiming for broader qualitative fits. Troitzsch's 'levels of prediction' and [shameless plug] Rossiter et al.'s scientific positioning characterisation look at this: http://jasss.soc.surrey.ac.uk/12/1/10.html and http://jasss.soc.surrey.ac.uk/13/1/10.html.
    1. We debated whether a policy audience is fundamentally different to an academic one, but weren't sure: most policy-makers follow their academic training anyway, but will tend to want point estimates which fit with their budget-related decision-making needs. There's been focused discussion on that at things like the OR Society "Are we there yet?" workshop, summarised in Hoad & Watts: http://dx.doi.org/10.1057/jos.2011.19.