Millikan’s notion of “substance templates” plays a central role in her account of cognition and meaning. However, the notion remains underexplored: What are substance templates? Are they empirically grounded entities or purely theoretical posits? Are they necessarily innate, or can they be acquired ontogenetically? How do they contribute to content fixation as advertised?
To address these questions, I draw on cognitive and developmental psychology to propose a guiding hypothesis: substance templates are analogous to the so-called “overhypotheses” in Bayesian models of learning, realised as a form of “attentional filter”. These filters can be acquired ontogenetically by establishing determinable-determinable associations, and they serve to fix reference to a domain by singling out the featural dimensions relevant to learning about that domain. I explain how my approach to substance templates helps block the objection that substance templates must be semantic entities akin to “mental theories” and are therefore incompatible with a naturalistic, externalist framework.
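To fix ideas, here is a minimal computational sketch of an “overhypothesis” in the hierarchical Bayesian sense, loosely modelled on Kemp, Perfors, and Tenenbaum’s marble-bag example. It is my own toy illustration, not the paper’s model: the learner acquires a determinable-level regularity (bags are uniform in whatever colour they have) and can then generalise from a single marble drawn from a new bag. All parameter names and priors are assumptions for the sake of the sketch.

```python
# Toy overhypothesis learner (illustrative only): infer hyperparameters governing
# how marble colours vary within and across bags, then generalise from one draw.
import numpy as np
from scipy.stats import betabinom

# Observed bags: (number of black marbles, bag size). Each bag is near-uniform.
bags = [(20, 20), (0, 20), (19, 20), (1, 20)]

# Grid over hyperparameters: alpha (within-bag uniformity) and mu (overall colour bias).
alphas = np.logspace(-1, 2, 60)      # small alpha -> bags strongly uniform
mus = np.linspace(0.01, 0.99, 60)

log_post = np.zeros((len(alphas), len(mus)))
for i, a in enumerate(alphas):
    for j, m in enumerate(mus):
        # Beta-binomial likelihood of each bag under hyperparameters (a*m, a*(1-m)).
        log_post[i, j] = sum(
            betabinom.logpmf(k, n, a * m, a * (1 - m)) for k, n in bags
        )
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior mass on "bags are uniform" (small alpha): the learned overhypothesis.
print("P(alpha < 1 | data) =", round(post[alphas < 1].sum(), 3))

# New bag, one black marble observed: predict the next marble from that bag.
pred = 0.0
for i, a in enumerate(alphas):
    for j, m in enumerate(mus):
        a1, b1 = a * m + 1, a * (1 - m)   # update bag-level Beta by one black draw
        pred += post[i, j] * a1 / (a1 + b1)
print("P(next marble in new bag is black | one black seen) =", round(pred, 3))
```

On the analogy I propose, the learned hyperparameters play the role of an attentional filter: they direct learning to the relevant featural dimension (colour uniformity) when a new member of the domain is encountered.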
It has long been suggested that associative learning involves more than the “blind” pairing of co-occurring stimuli. For example, Rescorla once argued that associative learners in fact make use of perceptual relations and prior expectations to construct sophisticated models of the environment. Recent work on the “latent-cause” model of associative learning fleshes out this idea by suggesting that classical conditioning rests on inferences over hidden causal structures rather than brute pairing.
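The following is a minimal, assumption-laden sketch of latent-cause inference in conditioning, loosely in the spirit of Gershman-style models rather than a reconstruction of the paper’s own formalism: each trial’s cue/outcome pattern is attributed to a hidden cause, with a Chinese Restaurant Process prior favouring the reuse of existing causes. The trial structure, priors, and local-MAP approximation are all illustrative choices.

```python
# Toy latent-cause assignment (illustrative only): trials are clustered by the
# hidden cause most likely to have generated their cue/outcome pattern.
import numpy as np

ALPHA = 1.0          # CRP concentration: willingness to posit a new latent cause
BETA_PRIOR = (1, 1)  # Beta prior on each feature's probability under a cause

def feature_loglik(x, counts, total):
    """Log-likelihood of binary feature vector x under a cause's Beta-Bernoulli stats."""
    a, b = BETA_PRIOR
    p = (counts + a) / (total + a + b)
    return np.sum(np.log(np.where(x == 1, p, 1 - p)))

# Trials as [tone, light, shock]: acquisition (tone -> shock), then a context change.
trials = np.array([
    [1, 0, 1], [1, 0, 1], [1, 0, 1],   # tone followed by shock
    [1, 1, 0], [1, 1, 0], [1, 1, 0],   # tone + light, no shock
])

causes, sizes, assignments = [], [], []
for x in trials:
    # Local MAP: score existing causes plus one brand-new cause.
    scores = [np.log(sizes[k]) + feature_loglik(x, causes[k], sizes[k])
              for k in range(len(causes))]
    scores.append(np.log(ALPHA) + feature_loglik(x, np.zeros_like(x), 0))
    k = int(np.argmax(scores))
    if k == len(causes):               # posit a new latent cause
        causes.append(np.zeros_like(x, dtype=float))
        sizes.append(0)
    causes[k] += x
    sizes[k] += 1
    assignments.append(k)

print("trial -> inferred cause:", assignments)   # e.g. [0, 0, 0, 1, 1, 1]
```

The point of the sketch is structural: information is organised by positing distinct worldly sources and filing observations under them, which is the feature I exploit in drawing the comparison with mental files below.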
I hypothesise that latent-cause-based associative learning is an evolutionary precursor to the “mental file” architecture in human cognition: both mechanisms seek to organise information by mirroring the causal structure of the environment. I argue that capturing this functional continuity requires attributing referential or Russellian content to the underlying representations.
Many theorists hold that we can sometimes think directly about an object in the world by standing in an appropriate perceptual relation to it. A prominent view suggests that this perceptual relation is provided by so-called “object files”. On this view, which we might call “object-file singularism”, singular thoughts are modelled as perceptual demonstratives, with object files playing the role of a pointing gesture plus a demonstrative term “this/that”. Typically, this model assumes that perceptual demonstratives function like simple demonstratives (“that is F”) rather than complex demonstratives (“that G is F”).
In this paper, I argue for a pluralist view: perceptual demonstratives can also function like complex demonstratives, since object files sometimes consult featural information (akin to using nominals) during object tracking. Importantly, drawing on experimental evidence, I argue that perceptual demonstratives can behave like complex demonstratives in two distinct ways, depending on the extent to which the scene allows misidentifications.