Stuart Rossiter's contributions, 19 Nov 2010

Regarding the discussion of Epstein's "Why model?" paper, 19 November 2010.

Anyway, some thoughts below. If I can't make it in person, feel free either to mention them specifically or just to digest them yourselves. (Feedback would be useful, of course.)

1. I broadly agree with Epstein's view and Troitzsch's clarification of 'prediction'. However, I think the thread of the discussion still fixates too narrowly on a semantic dissection of 'prediction', when the basic argument is really quite simple.

In essence, Epstein is discussing 'why model?' in general. Thompson & Derr have (falsely) narrowed the discussion by assuming that the 'why' can only mean 'for scientific advance' (and are thus horrified that he seems to downplay prediction). Epstein 'downplays' it precisely because he wants people to realise that a model, as a precise, computational (falsifiable) representation of an argument or thought experiment, is useful in itself (and for the scientific frame of mind he talks about). This then leads into the separate debate in the social sciences (see Moss/Edmonds and companion modelling) on the extent to which usable predictive accuracy is achievable, and on whether subjective stakeholder belief in the representation matters more.

In terms of Thompson & Derr's more specific scientific-validity angle, I think it is a given in science that predictive power is the strongest criterion of adequacy for a theory, and that this can proceed on a 'sliding scale' from broad qualitative predictions to specific quantitative ones.

(Though this is itself more subtle; e.g., chaos theory can predict precise quantitative values for qualitative phase changes, so it's not just about predicting an exact state. Or, rather, it depends on the definition of 'state', which doesn't have to be a reductionist solution of exact attribute values for the smallest individual components.)

It's also accepted that use of models to 'discover new relationships and principles' is a scientifically valid usage (e.g., chaos theory period doubling); this is 'prediction' in the sense of 'here are potential new principles that we didn't realise before that may therefore provide a basis for prediction of behaviour that wasn't previously considered'; i.e., this is 'prediction of predictive utility'!
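(As an aside, the period-doubling point can be made concrete in a few lines. This is a minimal sketch of my own, not from any of the papers under discussion: the logistic map x -> r*x*(1-x) undergoes qualitative phase changes at quantitatively predictable parameter values, with period 2 appearing at r = 3 and period 4 at r = 1 + sqrt(6), approx. 3.449, so sampling the map on either side of those thresholds exhibits the predicted change of regime.)

```python
# Illustration only: period doubling in the logistic map x -> r*x*(1-x).
# Theory predicts qualitative regime changes at precise parameter values:
# period 2 appears at r = 3, period 4 at r = 1 + sqrt(6) ~ 3.449.

def attractor(r, x0=0.5, transient=2000, sample=64):
    """Return the distinct points of the settled orbit at parameter r."""
    x = x0
    for _ in range(transient):       # discard transient behaviour
        x = r * x * (1 - x)
    pts = set()
    for _ in range(sample):          # collect the settled orbit
        x = r * x * (1 - x)
        pts.add(round(x, 6))         # round to merge near-equal points
    return sorted(pts)

print(len(attractor(2.8)))   # 1: stable fixed point
print(len(attractor(3.2)))   # 2: after the first doubling
print(len(attractor(3.5)))   # 4: after the second doubling
```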

2. I think Axelrod's 1997 paper is almost identical to Epstein's one, but phrases it a little better IMO (Advancing the art of simulation in the social sciences; attached). It is also useful to compare Troitzsch's classification of predictive power here with Roughgarden et al.'s 1996 scale of system specificity of application: from 'minimal models for ideas' to 'minimal models for a system' to 'synthetic models for a system' (Adaptive computation in ecology and evolution: a guide for future research).

I don't agree with (elements of) Roughgarden et al.'s coupling of system specificity to model complexity here. A very specific real-world system may, in fact, make a more general model more applicable: e.g., a very specialised ecosystem with a predator-free dominant species. Again, there is an element of semantics here: it depends on what you mean by 'specific' and how this relates to 'complex'. My example turns on a difference in meaning, between specificity as real-world specificity and specificity as theory-guided specificity. (The example is real-world specific, but abstract/simple from a theory-guided frame.)

I've left this a bit messy but, in general, I think this debate on specificity is crucial to pair up with Troitzsch's categories.

3. I think the oft-trotted-out example of Babylonian (or Ptolemaic) predictive power but explanatory invalidity is also subtly over-simplified. Ptolemaic epicycle theory is 'wrong' in the sense of the true causative mechanisms of motion (planets don't move on invisible spherical surfaces/circles). However, it is accurate because, geometrically, combinations of epicycles can provide a good approximation to elliptical paths (stacked epicycles act like terms in a Fourier series). Thus, it can be seen as a kind of 'engineering approximation' to the correct theory, albeit arrived at via an incorrect belief in causative mechanisms. I think this is strongly related to the idea of a functional explanation of a system, rather than a causal one. This is described very well by Grüne-Yanoff (paper also attached; see esp. pp. 20-21). The difference here is that, in his example of the climate model with an artificial 'instability dampening component', this is knowingly not a correct causal mechanism, but the fact that it produces better alignment with data means that it hints at the functional nature of the 'missing pieces' in the causal mechanisms.

(There's a lot more to think about here. Of course, epicycles are not an 'engineering approximation' in the normal sense, since they're actually more complex to calculate. It's more that this is the closest approximation to the 'real' theory given the assumption of an Earth-centric universe. However, it is only in retrospect that we can say that Ptolemy's ideas didn't provide a functional explanation fruitful for further development; we similarly don't know in advance for the climate model Grüne-Yanoff discusses.)
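(To make the geometric point concrete, here is a minimal sketch of my own, not from the papers: a deferent plus a single counter-rotating epicycle, z(t) = R1*e^(it) + R2*e^(-it), traces an exact origin-centred ellipse with semi-axes R1+R2 and R1-R2. This is a special-case construction, not Ptolemy's actual one, but it is one way to see why stacking epicycles can reproduce elliptical paths despite the wrong causal story.)

```python
# Illustration only: a deferent of radius R1 plus one counter-rotating
# epicycle of radius R2 gives z(t) = R1*exp(it) + R2*exp(-it)
#   = (R1+R2)*cos(t) + i*(R1-R2)*sin(t),
# i.e. an exact origin-centred ellipse with semi-axes a = R1+R2, b = R1-R2.
import cmath
import math

R1, R2 = 1.0, 0.25            # deferent and epicycle radii (arbitrary choice)
a, b = R1 + R2, R1 - R2       # predicted semi-axes of the traced ellipse

for k in range(8):            # sample the path and check the ellipse equation
    t = 2 * math.pi * k / 8
    z = R1 * cmath.exp(1j * t) + R2 * cmath.exp(-1j * t)
    assert abs((z.real / a) ** 2 + (z.imag / b) ** 2 - 1) < 1e-9

print("epicycle path satisfies (x/a)^2 + (y/b)^2 = 1")
```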