Immunizing Strategies & Epistemic Defense Mechanisms


(to appear in Philosophia, DOI: 10.1007/s11406-010-9254-9)

Abstract

An immunizing strategy is an argument brought forward in support of a belief system, though independent of that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is examined, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.

Keywords: immunizing strategies; epistemic defense mechanisms; pseudoscience; belief systems


“A successful pseudoscience is a great intellectual achievement. Its study is as instructive and worth undertaking as that of a genuine one.” - Frank Cioffi (1998, 115)

 

1                 Introduction

Skeptics of pseudoscience and the paranormal have been amazed and sometimes exasperated by the enduring popularity of beliefs that are either very implausible or impossible from a scientific and rational perspective (Benassi, Singer et al. 1980; Shermer 2002; Hines 2003). Although many of these belief systems have been thoroughly debunked, the critical efforts of skeptics are mostly unavailing. In this paper, we discuss the remarkable recurrence of immunizing strategies and defense mechanisms, which play an important role in the tenacity of these belief systems. We define an ‘immunizing strategy’ as an argument brought forward in support of a belief system, though independent of that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence[1]. By contrast, an epistemic ‘defense mechanism’ is defined as an internal structural feature of a belief system, which has the same effect of deflecting rational arguments and empirical refutations.

1.1         The demarcation problem

The idea of immunizing strategies sometimes occurs in the philosophical debate about the demarcation problem, i.e. the problem of finding criteria for distinguishing science from non-science. Karl Popper famously argued that the most distinctive feature of the scientific attitude is the willingness to take bold empirical risks, and that a theory can only be regarded as scientific to the extent that it is open to empirical refutation. Of course, resorting to immunizing tactics to protect one’s theory from falsification is doing exactly the opposite of taking empirical risks, and hence, according to Popper’s view of science, it is the hallmark of pseudoscientific thinking. However, naive falsificationism has been widely abandoned in philosophy of science (Laudan 1983), and the enthusiasm for the demarcation project has waned significantly. A more sophisticated philosophy of science accepts that every scientific research programme builds up a “protective belt” of auxiliary hypotheses around its “hard core” claims (Lakatos and Musgrave 1970; Lakatos 1968). Thus, to a certain extent, ‘immunizing strategies’ can be found in bona fide science as well, and scientists are certainly not immune to the use of bad arguments to deflect valid criticism (Hines 2003; Park 2002).

Nevertheless, Popper’s basic insight about the importance of boldness in conjecture making is still valuable, and has survived in a more sophisticated form in Lakatos’ philosophy of science. According to Lakatos, if auxiliary hypotheses are entirely ad hoc and merely decrease the empirical content of a theory, we have a case of a degenerating research programme. Therefore, the reliance on defensive maneuvers, evasive arguments and ad hoc excuses is still widely regarded as a telltale feature of pseudoscientific discourse (e.g. Derksen 1993; Hines 2003).

1.2         Overview

In this paper, we are not concerned with immunizing strategies as symptomatic of pseudoscience. Instead, we present a classification of different types of immunizing strategies and defense mechanisms, and we discuss their epistemological status. To be sure, many of the belief systems we discuss are traditionally regarded as pseudoscience, but as we are not particularly interested in demarcation problems, we will also offer examples from other domains, for instance cult belief systems, pseudo-philosophy, magic and religion. Indeed, one of the main purposes of this paper is precisely to call attention to the pervasiveness of immunizing strategies and epistemic defense mechanisms across widely different domains.

We provide an overview of the different ways in which a belief system can be rendered immune from criticism and adverse evidence, but our list is not intended to be exhaustive. In discussing these examples, we will also notice that it is often somewhat arbitrary to separate the theory-as-such from the ‘immunizing strategies’ used by its defenders. Thus, the strict distinction between immunizing strategies and defense mechanisms will be called into question.

2                 Theory change and degenerating research programmes

As many of our examples are taken from bad science or pseudoscience, we first want to clarify our notions of immunizing strategies and defense mechanisms from a history of science perspective, and relate them to the concept of theory change and progressive/degenerating research programmes in science.

Historians of science have described interesting cases of theories which, while initially promising and respectable in the eyes of the scientific community, ran into serious problems over time. As ever more anomalies and problems turned up for these theories, they eventually ‘died the death of a thousand excuses’. In Lakatos’ terminology, they turned into degenerating research programmes. Examples include phrenology, the theory of ‘cold fusion’ in physics, Lamarckism in biology, Marxist theories about the law of profit and the demise of capitalism, and, more recently, the Duesberg hypothesis on the non-infectious nature of AIDS. Although it is often difficult to identify a moment when a theory or a research programme collapses under the weight of these difficulties, it is widely acknowledged that there are certain indications of a degeneration into bad science or pseudoscience.

On the one hand, advocates of a theory may resort to certain generic strategies for protecting a cherished theory from mounting adverse evidence: cherry-picking the data, shooting the messenger, distorting findings, special pleading, discrediting the methods employed in research with unwelcome results, accusing the new ‘orthodoxy’ of a hidden agenda etc. These generic methods can be broadly construed as ‘immunizing strategies’, but they are not particularly interesting from a philosophical perspective, and we will not be much concerned with them in this paper.

On the other hand, a theory-in-crisis is often belatedly modified by its advocates so as to be less vulnerable to refutation, by introducing ad hoc elaborations and special clauses that explain away apparent failures and reduce the empirical content of the theory. As every scientific theory makes use of a protective belt of auxiliary hypotheses, these amendments can seem scientifically respectable at an early stage, and there is often no clear point at which they collapse into pseudoscientific ‘immunizing strategies’.

Insofar as it is possible to separate the original theory from later (pseudoscientific) modifications, we prefer to use the term ‘immunizing strategies’ for arguments that are brought forward at some point to rescue the original theory from refutation. However, in more complicated cases, these protective strategies progressively become an integral part of the theory proper (see the examples of parapsychology and psychoanalysis in 3.5.1). They are no longer ‘strategies’ to which advocates resort when the theory runs into trouble, but have become integrated into the explanatory resources and conceptual structure of the theory. We designate these internal, structural features as epistemic ‘defense mechanisms’.

In some interesting cases, as we will see, the defense mechanisms of a belief system even follow naturally from its conceptual nucleus (see the example of conspiracy theories in 3.3). As a result of their inbuilt defense mechanisms, these belief systems exhibit a self-perpetuating rationale, and are particularly interesting from an epistemological perspective. We will briefly return to the conceptual distinction between immunizing strategies and defense mechanisms in 4.1.

3                 Immunizing Strategies & Defense Mechanisms

3.1         Conceptual equivocations & moving targets

Pseudoscientists often make use of conceptual equivocations to transform their theory into a moving target. This may be achieved in two different ways. Either one makes a series of ambiguous and open-ended claims, construed in such a way that one can conveniently switch back and forth between specific and broad interpretations; or one defends a theory that appears specific and exciting on first inspection but, when it runs into trouble, is belatedly deflated into something trivial or uninteresting. The two immunizing strategies are sometimes difficult to distinguish, and some successful pseudoscientists use them in tandem.

3.1.1            Multiple endpoints

Skeptics have often remarked that astrologers and soothsayers shy away from making bold and specific statements. For example, the predictions in a horoscope typically have multiple endpoints (Gilovich 1991, 58-59), so that they can be matched retrospectively to almost any event. In fact, a ‘good’ horoscope contains predictions that are amenable both to specific interpretations and to a range of broad and metaphorical interpretations. Inevitably, people will perceive some matches with real events, and they will immediately see these as the intended or real interpretation of the prediction, ignoring the other ways in which it might have been ‘borne out’. As a result, they will be unduly impressed by its accuracy.[2] Even if not all predictions yield an uncanny match with a real event, the astrologer – or the naive interpreter – can always resort to one of the broad interpretations, thus avoiding the impression that a prediction has failed. Naturally, people will tend to remember the predictions of the first category, the apparent hits. Thus, the technique of multiple endpoints creates an in-built asymmetry between what will count as hits and misses for astrological predictions, the effect of which is to immunize astrology against refutations.

As an example of this asymmetry, consider the case of Nostradamus’ prophecies, which, as is well known, allow for almost unlimited allegorical and metaphorical interpretation (Marks 2000, 262-266; Hines 2003). From the moment interpreters perceive a ‘fit’ with actual historical events, the congruency seems so compelling that they are unable to read the prophecies in any other light. The abundance of quatrains and the problem of multiple endpoints guarantee that people will find a lot of tenuous matches, some of which even look striking. As for the other predictions, one can readily insist that these have yet to be borne out, or that their ‘true’ meaning has not yet been discovered.

As there is nothing in astrological theory that dictates the use of equivocations and multiple endpoints, we may regard this technique as an immunizing strategy used by some astrologers to forestall predictive failure, as opposed to a defense mechanism. On the other hand, the practice is so common that it has become inseparable from the field of astrology, and several authors have devised convenient rationalizations for it. For example, Nostradamus explained that he deliberately obscured his predictions so as to avoid persecution by the Inquisition.

3.1.2            Deflationary revisions

As an example of the second type, consider the case of the Jehovah’s Witnesses who, after the prediction of the Second Coming of Christ in 1873-74 failed to come true, argued that Christ had returned as predicted, but as an invisible spirit being (Zygmunt 1970, 931). Zygmunt has demonstrated that, over the course of their history, Jehovah’s Witnesses have consistently “redefined [failed prophecies] in retrospect in a manner which provided nonempirical confirmation” (Zygmunt 1970, 934). Often enough, these were deflationary revisions of the original prediction.

Philosopher Frank Cioffi has documented an interesting case of belated deflationary revisions in Freud’s theory of the libido. As is well known, Freudian psychoanalysis makes the sweeping claim that the root of all neuroses is to be found in the repressed ‘libido’. Freud’s intended interpretation was clearly sexual. For example, one can only understand why fathers threaten their sons with penile amputation if one accepts that the desires of the sons were very carnal indeed. Freud elevated this sexual etiology of all neuroses to a central dogma of psychoanalysis, and he derided others when they were compromising on this point. When presented with empirical difficulties, however, Freud resorted to a fuzzier interpretation, widening the scope of the libido concept so as to make it encompass “what Plato meant by ‘Eros’ and St. Paul by ‘love’” (quoted in Cioffi 1998, 16). For example, in Freud’s explanation of the ‘war neuroses’ following the First World War, only a deflationary interpretation of libido as a general kind of self-love allowed him to maintain his sweeping universality thesis. In the case of Freudian psychoanalysis, the equivocations surrounding the concept of libido (and other pseudoscientific concepts, see 3.3.2 and 3.2) were arguably always an integral part of the theory, so that one may properly speak of a defense mechanism.

The strategy of belated deflationary revisions is also rampant in a great deal of postmodernist and social constructivist literature, where it is used in tandem with a maneuver in the opposite direction. André Kukla has dubbed these strategies “switcheroos” and “reverse-switcheroos”:

One commits a switcheroo by starting with a hypothesis that's amenable to a range of interpretations, giving arguments that support a weak version, and thenceforth pretending that one of the stronger versions has been established. (2000, x)

What Kukla terms “reverse switcheroos” corresponds to what we call deflationary revisions:

you put forth a strong version of the hypothesis, and when it gets into trouble, you retreat to a weaker version, pretending that it was the weaker thesis that you had in mind all along. Switcheroos and reverse switcheroos can be performed in tandem, and the cycle can be repeated ad infinitum. A judicious application of this strategy enables one to maintain an indefensible position forever. (2000, x)

A skilled pseudoscientist switches back and forth between different versions of his theory, and may even exploit his own equivocations to accuse his critics of misrepresenting his position. Philosopher Nicholas Shackel has termed this strategy the “Motte and Bailey Doctrines” (Shackel 2005; see also Fusfield 1993), after the medieval defense system in which a stone tower (the Motte) is surrounded by an area of open land (the Bailey):

For my purposes the desirable but only lightly defensible territory of […] the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. (Shackel 2005, 298)

Analogous to Kukla’s analysis of switcheroos, Shackel argues that a successful application of this strategy requires a “systematic vacillation between the desired territory and retreating to the Motte when pressed” (Shackel 2005, 298). Again, this retreat to the Motte corresponds to what we call deflationary revisions.

Some recent examples of these Motte and Bailey strategies are found in the literature of Intelligent Design Creationists. For example, the central concept of “irreducible complexity”, introduced by ID advocate Michael Behe, vacillates between an empirically adequate but somewhat trivial observation, and an exciting but completely unfounded claim (Boudry, Blancke et al. 2010). Behe writes that a system is irreducibly complex when the removal of any one of its components leads to a breakdown in functioning. He has argued that “any precursor to an irreducibly complex system that is missing a part is by definition non-functional” (Behe 2006, 39), and that therefore evolution by natural selection is ruled out. However, evolution by natural selection often works by indirect routes and by co-opting existing systems to perform other functions. When pressed on this point, Behe retreats to a deflationary interpretation of irreducible complexity, which simply amounts to the claim that some biological systems cease functioning when one or more components are removed. But after he has given arguments for this defensible but uninteresting position, Behe again proceeds to use his concept as though it posed a major problem for evolutionary theory. This equivocation, which allows Behe to keep on moving the goalposts, is inherent in the very definition of irreducible complexity, so that one may regard it as a defense mechanism of Behe’s ID theory.

The work of William Dembski, another leading theorist of the ID movement, is similarly based on bait-and-switch strategies. In the use of his notion of complex specified information (CSI), Dembski continually switches back and forth between Shannon’s mathematical definition of information, which is simply a measure of randomness, and the common notion of information as “meaningful message” (Perakh 2004, 64-75).

3.2         Postdiction and feedback loops

Unobservable entities are routinely invoked in scientific explanations, and there is nothing wrong per se with theories that make use of them. However, when particular defense mechanisms are present, belief systems about unobservable entities and their causal workings become completely immune from falsification. The recipe for such a belief system is as follows: postulate the existence of certain invisible or imponderable causes to account for a range of phenomena, and maintain that the working of these causes can only be inferred ex post facto from their effects. Provided that the effects themselves are hard to assess, and that the causal relations in the belief system are sufficiently underspecified, subtle feedback loops between belief system and observations will keep it forever outside the reach of empirical refutation.

As an example, consider the belief in the efficacy of rituals and magical interventions. Anthropologists have noted that the question whether a performed ritual is ‘genuine’ is often underspecified by its constitutive components, and can only be settled ex post facto, depending on the expected outcome. If the result is successful, one can infer that the intervention was the right one and was properly performed. If it was not, obviously ‘something must have gone wrong’ during the intervention, or the intervention was not of the appropriate type. Indeed, the very idea of a failed ritual does not make sense, because any apparent failure ‘shows’ that it was just not performed properly, or not with the right material, or that some other and equally invisible force interfered with the ritual (additional immunizing strategies are possible). According to anthropologist Evans-Pritchard, belief in ritual efficacy is protected by a whole repertoire of “secondary elaborations” for explaining away particular failures in the expected effects (Boyer 1994, 207). In this way, the general causal principles themselves remain immune from disconfirmation.

As a result, the taxonomic identification of objects as having certain magical properties, or of a person as a ‘real’ shaman, and the causal expectations based on these identifications tend to feed back into each other, engendering a vicious circularity. As Pascal Boyer noted: “taxonomic assumptions are the basis of causal expectations, and conversely, causal expectations lead to innovations or corrections in the taxonomic identification” (Boyer 1994, 144). For example, the efficacy of a magic spell to chase away evil spirits is assessed on the basis of how the patient’s condition develops, but the question whether the patient is now really liberated from these evil spirits is itself determined by the ‘genuineness’ of the magic spell (which may depend on the conditions of the exorcism or the reputation of the healer).

As another example, consider the belief in the therapeutic power of healing crystals, chakra stimulation or even homeopathy. On the one hand, the causal relations in these belief systems are always underspecified: what kind of crystal is appropriate for which patients, how long it takes for chakras to open, what kind of homeopathic medicine is suitable for which patient. Different interventions are ‘allowed’ by the belief system, and the one that coincides with the moment of recovery can be construed as the apparent cause. On the other hand, the therapeutic effects themselves are often difficult to assess objectively. For example, what exactly are the visible results of having one’s energy levels ‘restored’, or one’s ‘chakras released’, according to alternative therapists? As a result of these defense mechanisms, causal inferences feed back into each other in a way that always protects the belief system from refutation.

The technique of postdiction also underlies an elegant rationalization for data mining, confirmation bias and explaining away null results in parapsychology (Gilovich 1991, 21). Parapsychologists have tried out a whole range of different experimental set-ups and procedures to summon psi phenomena. When confronted with a pattern of alternate successes and failures, many parapsychologists have explained that psi is an elusive and unpredictable force. As a result, they find it easy to interpret the patterns of hits and misses ex post facto as the result of the intermittent workings of the psychic power, and to explain away failures as due to settings that were simply not psi-conducive. James Randi recounts the remarkable case of a water diviner who, when asked why he did not count his failures in a series of experiments, replied that “obviously, when I fail, the powers aren’t working at that time, and, after all, I’m counting percentages on the cases where I’m divining, not when I’m just guessing!” (Randi 1981, 13) Randi notes that even many highly regarded psi experiments include a series of preliminary “warm-up” sessions. In this context, the technique of postdicting psi-activity is very tempting: “OK, that was just warming up” – “It seems I’m getting a little tired” – “There are obviously bad vibes around that are distracting me”. On a more academic level, postdiction is often used to rationalize the practice of data mining. As Richard Wiseman noted, if the pool of parapsychological experiments is sufficiently extensive and heterogeneous, it is not difficult to “‘explain away’ overall null effects by retrospectively identifying a subset of studies that used a certain procedure and yielded a significant cumulative result” (Wiseman 2010, 38).

Depending on one’s understanding of parapsychological theory, the practice of postdiction can be regarded as an immunizing strategy or a defense mechanism. If one holds that the characterization of psi as elusive and unpredictable is just a pseudoscientific excuse that has nothing to do with parapsychology proper, one may regard it as an immunizing strategy. By contrast, if one maintains that the elusive and unpredictable nature of psi is a central thesis of parapsychology (see 3.5.1), one may properly call it a defense mechanism of the belief system.

A beautiful example of postdiction can also be found in, once again, Freudian psychoanalysis. In the etiology of psychological illness, Freud hypothesized that there is an unobservable ‘quantitative factor’ in the patient’s libidinal economy that has to be taken into account. After all, according to psychoanalysis, ‘normal’ persons harbour the same repressed wishes and complexes that are found in neurotic patients. The difference between the two groups lies only in the quantitative factor in their mental economy, which ultimately determines if and when an unconscious complex will develop into neurosis. Tellingly, Freud admitted that this factor could only be inferred ex post facto to account for the unexpected presence or absence of any given symptom:

We cannot measure the amount of libido essential to produce pathological effects. We can only postulate it after the effects of the illness have manifested themselves. (Freud 1924, 119)

3.3         Conspiracy thinking

3.3.1            Turning the evidence on its head

Conspiracy theories are very interesting from an epistemological perspective, and certainly deserve a more extensive discussion than the one we can offer within the confines of this paper (Clarke 2002; Keeley 1999). For our present purposes, we want to highlight the fact that all conspiracy theories share a fundamental template of epistemic defense mechanisms. Conspiracy theories purport to provide an explanation of a historical event that differs markedly from the received view or official account. According to conspiracy theorists, the event in question was brought about by a group of actors who have been secretly pulling the strings behind the scenes, and who have tried to cover up their actions by spreading a false story. This false account is the received view, which they are trying to fool us into believing. However, the conspirators have not been completely successful, and they have left traces that allow the conspiracy theorist to reveal their evil plot.

Conspiracy theorists point to incongruities and anomalies in the official account of events, and try to account for these by constructing a unifying alternative explanation. Brian Keeley (1999, 118) has termed these the “errant data” with which conspiracy theories are constructed, and he distinguishes two classes: data that are unaccounted for on the official account, and data that actually contradict it.

However, if the conspiracy hypothesis should fail to be confirmed by further investigations, or if new evidence should turn up that flatly contradicts it, conspiracy theorists typically turn the evidence on its head, arguing that an apparent accordance with the official story is of course predicted by their theory. After all, successful conspirators may be expected to deliberately lay out forged evidence to lead us astray, to cover up the traces of the secret plot, to bribe those who witnessed the cover-up, etc. As Clarke (2002, 135) notes: “the apparent plausibility of the nonconspirational received view is a consequence of the success of the cover story or cover-up, according to conspiracy theorists”. This pattern of epistemic defense mechanisms, in which any apparent contradiction can be turned into a confirmation, is a common feature of all global conspiracy theories[3].

Thus, confronting ardent conspiracy theorists with adverse evidence and eyewitness accounts is generally to no avail. In the believers’ eyes, this apparent evidence merely constitutes further proof of the cunning and power of the conspirators. The epistemological situation of the conspiracy thinker reminds one of the hollow face illusion: whichever way we look at the mask of a hollow face, from the front or from behind, we always ‘see’ a normal convex face staring at us (Gregory 1997). In a similar way, no matter how the evidence turns out, the conspiracy theorist always ‘sees’ the actions of conspirators.

3.3.2            Explaining the motives for disbelief

A special defense mechanism implicit in conspiracy theories allows the believer to explain the existence of disbelievers within the framework of the belief system itself. For example, Sigmund Freud thought he was able to account for the ‘resistance’ of his opponents in psychoanalytic terms:

Psycho-analysis is seeking to bring to conscious recognition the things in mental life which are repressed; and everyone who forms a judgment on it is himself a human being, who possesses similar repressions and may perhaps be maintaining them with difficulty. They are therefore bound to call up the same resistance in him as in our patients; and that resistance finds it easy to disguise itself as an intellectual rejection and to bring up arguments like those which we ward off in our patients by means of the fundamental rule of psycho-analysis. (Freud 1957, 39)

Thus, people attack psychoanalysis because they themselves harbor the repressed wishes and complexes revealed by the theory. Being under the spell of unconscious forces, the critics are not even aware of their unconscious motivations, because these are ‘disguised’ for them as rational arguments.[4] As a consequence, any objection, however seemingly reasonable, can be dismissed by the psychoanalyst as unconscious resistance in disguise (Gellner 1985). Hence, it is the perfect joker card of the pseudoscientist. Defenders of Marxism sometimes use a similar immunizing argument, labeling criticism from outsiders as a manifestation of ‘bourgeois class consciousness’, thus turning the criticism into a demonstration of the very theory the critics were objecting to.[5]

The argument from resistance is not just a form of rhetoric that some psychoanalysts happen to resort to in the face of valid criticism – rather, it is “an imperative emanating from the heart of the psychoanalytic vision” (Crews 1986, 14). Indeed, if Freud’s model of the human mind were accurate, we would expect the kind of disguised resistance he was alluding to. Hence, in our terminology, the argument plainly is an epistemic defense mechanism of the psychoanalytic belief system.

Essentially, the argument from resistance is structurally identical to any form of conspiratorial suspicion that takes the attacks of critics as lending further support to the belief system. This style of reasoning is remarkably widespread, even outside classical conspiracy theories. For example, many creationists believe that evolution is an invention of the devil to deceive faithful Christians and lure them into disbelief. Henry Morris, co-author of the seminal work The Genesis Flood that sparked the Young Earth Creationism movement in the 1960s, actually believed that the theory of evolution was given by Satan himself to Nimrod, at the Tower of Babel. Morris wrote that “Behind both groups of evolutionists one can discern the malignant influence of 'that old serpent, called the Devil, and Satan, which deceiveth the whole world'.” (Morris 1963, 93)

In his conspiracy book on UFOs and alien abductions, history professor David Jacobs explains that the evidence for his views is so weak and sketchy because the aliens have carefully installed a “wall of secrecy” (Jacobs 1998, 117): they “cloud” the experience of their abductees, implant false memories, and they “perceptually alter potential witnesses” (Jacobs 1998, 112). In this way, skepticism and disbelief are easily explained: “The aliens have fooled us. They lulled us into an attitude of disbelief, and hence complacency, at the very beginning of our awareness of their presence.” (Jacobs 1998, 258) An even more extreme example of this defense mechanism is found in the way Scientology members handle criticism from outsiders. Notoriously, Scientologists systematically try to silence their critics by spreading false allegations and smearing their reputations[6]. In an internal policy letter, founder L. Ron Hubbard makes clear that critics can only have one motive for attacking Scientology:

There has never yet been an attacker who was not reeking with crime. All we had to do was look for it and murder would come out. […] They fear our Meter[7]. They fear freedom. They fear the way we are growing. Why? Because they have too much to hide. (Foster 1971, 134)

Interestingly, this alleged motive for attacking the Church is explained by the theory of Dianetics in terms that are almost identical to those of Freudian psychoanalysis. According to Scientologists, people naturally have a ‘reactive mind’ full of unconscious impressions and traumas called ‘engrams’. Members of the Church are called ‘Clears’, because they are liberated from the influences of this reactive mind. Non-members, who are called ‘pre-clears’, are still struggling with their engrams, and they will try anything to hide them from view. Hence their attacks on Scientology.

3.4         Changing the rules of play

By undermining the standards of reasoning employed in a rational debate, one can safeguard one’s position from valid criticism. In many instances of this immunizing strategy, the very attempt at criticism is condemned as fundamentally misguided. Sometimes, the reasons for this short-circuiting of criticism are dictated by the belief system itself, in which case we are dealing with an epistemic defense mechanism.

For example, according to postmodernist philosophers and radical social constructivists, there are no objective canons of rationality, only different social constructions of rationality that are all equally valid. On this view, defending a position at all amounts to pretending that some positions are more defensible than others, which is already misguided. Therefore, the postmodernist tries to occupy what philosopher Nicholas Shackel has termed the “No-Position Position” (Shackel 2005, 311-319). This conveniently allows him to “use normative notions of rationality while evading accountability to rational standard” (Shackel 2005, 312). The self-excepting nature of the “No-Position Position” reminds one of what the philosopher David C. Stove termed the Ishmael Effect, after Ishmael’s epilogue to Melville’s Moby Dick: “and I only am escaped alone to tell thee”. It refers to the claimed ability of some philosophical theory to escape the fate to which it condemns all other discourse.[8] Because the postmodernist pretends not to be accountable to any normative notion of rationality, the very act of criticizing his ‘position’ misses the point, and thus postmodernism is rendered completely immune from criticism. Shackel has meticulously demonstrated that, although it is obscured by the insinuations of the “No-Position Position”, self-refutation is inevitable in postmodernist discourse.

The postmodernist rejection of reason is an extreme example of stonewalling, but one can find other instances of this strategy that are not so sweeping as to entail self-refutation. For example, in discussions about alternative medicine one often hears the claim that each person or patient is “radically unique”, thus frustrating any form of systematic knowledge about diseases and treatments. Of course, advocates of unproven medical treatments use this argument as a way to deflect the demand for randomized and double-blind trials to substantiate their therapeutic claims (Williams 1980; Gordon 1996). If each patient is radically unique, there is no point in lumping patients together in one treatment group and statistically comparing them with a control group. Homeopathy, for example, “considers the single patient as indivisible and unique [...] as not accessible to the method of measuring” (Guttentag 1940, 1177). Indeed, the whole idea of a classification system of diseases is perceived by many advocates of alternative medicine as a form of greedy reductionism that eradicates the human subject. The argument is so convenient that it has been borrowed as an immunizing strategy by countless alternative therapists, including, inevitably, psychoanalysts.

3.5          Invisible escape clauses

A last popular immunizing strategy of pseudoscientists – one that in some cases takes the form of a full-blooded defense mechanism – consists in the ad hoc invocation of invisible or imponderable causes that conveniently account for a pattern of observations that would be expected if the theory were false. As in the case of conceptual equivocations above, the availability of these escape clauses is initially obscured, and they come out of the closet only when the theory runs into trouble, giving the pseudoscientist’s initial claims a spurious sense of empirical boldness. Again, we can distinguish two subtypes:

3.5.1            Tailoring around the phenomena

In the first subtype, the pseudoscientist invokes an invisible cause that exactly produces an observational pattern of apparent failure, thus protecting the theory from refutation. An extreme example of this strategy is the so-called Omphalos hypothesis by Philip Gosse (1857), a variant of creationism according to which God forged all the geological evidence for an ancient universe to test our faith. In fact, this is a limiting case of a conspiracy theory, in which there is only ‘inverted’ evidence for the theory: all the observations point in the direction of an old universe, which is exactly what one would expect from a deceitful divine being intent on testing our faith.

One among many interesting examples of this immunizing strategy in parapsychology is the idea of negative psi emitted by skeptical minds and experimenters in general (Wiseman 2010), which is a popular excuse when psi experiments fail (see for example Sheldrake 1995). Some authors have given it impressive labels like “catapsi”, which is defined as “the generation of ‘static’ that cancels out regular psi powers within its range” (Bonewitz 1989, 55). The idea that the presence of inquisitive minds disturbs paranormal phenomena had already occurred to Franz Anton Mesmer and his fellow magnetizers, who believed that a skeptical presence weakened the force of the magnetic fluid. Joseph P. F. Deleuze’s instructions for magnetizers were clear enough: “Never magnetize before inquisitive persons!” (quoted in Mackay 1974 [1841], 290)

Parapsychologists have also invented the ‘error phenomenon’, which refers to the finding that an error in the methodology or procedure of an experiment leads to better results, because such errors tend to activate psi (Humphrey 1996, 152). The famous psychical researcher John Beloff argued that psi phenomena are “actively evasive” (Beloff 1994, 7), and he used the term “decline effect” (1994, 11) to describe the puzzling tendency of psychics to lose their powers as they are tested more extensively. Some parapsychologists have hypothesized that the primary function of psi is to “induce a sense of mystery and wonder”, which allegedly explains its elusive character (Kennedy 2003, 67). Several other elaborate immunizing strategies have been devised for explaining why psi seems to actively avoid corroboration, some of which border on paranormal conspiracy theories (for an overview and discussion, see Kennedy 2001). Again, insofar as one takes these concepts to be an integral part of parapsychology, they are no longer immunizing strategies, but have to be characterized as full-fledged defense mechanisms.

However one decides the question, all of these fanciful concepts and explanations have one thing in common: they seem designed so as to mimic exactly the observations one would expect if the alleged psi phenomena were due to deception, trickery and methodological defects. They function as simple escape clauses for experimental failure, rendering psi theory immune from falsification.

3.5.2            Imponderabilia

The second subtype is related to the first, but with a different emphasis. Sometimes a pseudoscientist belatedly adds an extra factor to his theory that confounds the initial expectations it generated. A good example is the astrologer’s belated invocation of the configuration of the stars at the moment of conception – which is of course very hard to determine – when his prediction on the basis of the birth date has failed. Another example is the amusing suggestion by believers in Bigfoot that the creature is possibly “extradimensional”, so that any failed attempt to catch it can be explained by arguing that Bigfoot has escaped “into another dimension” (Zuefle 1999, 27; for the same trick with aliens, see for example Mack 1995).

Freudian psychoanalysis contains a host of escape clauses and methodological joker cards that make the theory eminently resilient to potential disconfirmations (Cioffi 1998; Esterson 1993). To give just one example, consider the way in which the previously discussed ‘quantitative factor’ in the patient’s libidinal economy confounds the empirical expectations initially engendered by the theory. Cioffi quotes several passages in which Freud leaves the reader with the impression that he has offered assessable hypotheses about the traumatic sexual events that predispose one to neurotic illness. However, on later occasions Freud admits that some people who fall ill have experienced none of these events, after which he resorts to the imponderable quantitative factor that can only be inferred ex post facto. As Cioffi (1998, 119) writes, “our hopes that Freud might be placing a limit on the kinds of events or states which are conducive to the onset of neurosis and might then go on to tell us what these are, are dashed” when we read these kinds of pseudoscientific elaborations of the theory.

4                 Remarks

4.1         Theory-as-such and immunizing strategies

In philosophy of science, some authors have emphasized that it is imperative not to confuse the theory-as-such with the immunizing tactics of its defenders (Grünbaum 1979; Grünbaum 2008). In regard to Freudian psychoanalysis, Adolf Grünbaum has insisted that the falsifiability of the theory-as-such be distinguished from the tenacious unwillingness of some psychoanalysts to face adverse evidence. Although we agree that, insofar as possible, it is important to make the distinction Grünbaum insists on, in general it is not clear who has the authority to decide where the theory-as-such ends and where the pseudoscientific immunizing strategies of its defenders begin (Cioffi 1998, 300). Consequently, there is often no objective way to distinguish immunizing strategies from internal defense mechanisms.

For example, the immunizing strategies used by parapsychologists to account for apparent experimental failure in fact follow naturally from the intrinsically elusive nature of the alleged psi force. Who has the authority to decide whether this characterization of psi has nothing to do with proper parapsychology, or whether it is an integral part of parapsychological theory? When is the belated deflation of a theoretical claim really a revision of the original, and when is it just an elucidation based on equivocations that were always there? And what if immunizing gambits and conceptual joker cards emanate directly from the core conceptual structure of the theory, as in Freudian psychoanalysis?

Although reasons of space prevent us from providing a detailed discussion of this problem, we hope that our analysis has at least challenged the idea that the purported distinction between the theory-as-such and the immunizing tactics of its advocates is a straightforward matter.

4.2         Strategic Deliberations?

Throughout this paper, we have used intentional language such as ‘strategies’, ‘evasions’ and ‘maneuvers’ for describing the ways in which pseudoscience and other belief systems are immunized against disconfirming evidence and rational criticism. However, elsewhere (Boudry and Braeckman [unpublished]) we have argued in detail that, in the eyes of believers, these arguments may be entirely convincing. Indeed, the overall impression of strategic convenience we are left with when confronted with immunizing strategies and defense mechanisms may well derive from the latter’s internal epistemic rationale, rather than from conscious deliberation and strategic planning on the part of believers.

5                 Conclusions

In this paper, we reviewed several ways in which a belief system can achieve epistemic invulnerability against falsification and rational criticism: (1) the use of conceptual equivocations & moving targets, either through the technique of multiple endpoints or that of deflationary revisions; (2) the postdiction of invisible causes and unassessable effects; (3) the double evidential standard of conspiracy thinking, including the practice of explaining disbelief; (4) the practice of changing the rules of play in a rational debate and thus short-circuiting any form of criticism; (5) the invocation of invisible escape clauses, either by tailoring the theory around the phenomena or by invoking imponderable causal factors that confound expectations. As we noticed throughout our discussion, these techniques can be found across widely different domains: parapsychology, pseudo-philosophy, belief in magic, conspiracy theories, alternative medicine, religious cults, etc.

At the outset of this paper, we distinguished immunizing strategies, which are brought forward by proponents from outside a belief system, from epistemic defense mechanisms, which are structural parts of the belief system itself. However, in running through our classification, we have found that this distinction is sometimes difficult to maintain. First, an ad hoc elaboration that was introduced at some point to rescue a belief system from apparent falsification may gradually develop into an integral part of that belief system. In this way, the distinction between immunizing strategies and epistemic defense mechanisms is blurred. Second, although in some cases an evasive maneuver can be easily detached from the theory-in-itself, in other cases escape maneuvers were already implicit in the conceptual structure of the theory. Contra Grünbaum, there is not always a clear point at which the theory-as-such ends and the immunizing tactics of its defenders begin.

The pervasiveness of immunizing strategies and epistemic defense mechanisms in pseudoscience goes some way toward explaining why these belief systems are so resilient in the face of adverse evidence, and why rational arguments are generally unavailing in debates with believers (Boudry and Braeckman [unpublished]). It seems that proponents of these belief systems are in fact well-prepared to withstand the impact of empirical refutation and the force of critical argument. Hopefully, further study along these lines will contribute to a better understanding of the enduring popularity and tenacity of belief systems that are highly implausible or impossible from a scientific perspective.

 

 

Acknowledgements

The authors would like to thank Stefaan Blancke, Filip Buekens and Massimo Pigliucci for stimulating discussions and comments. This paper was presented at the Fourth Conference of the Dutch-Flemish Association for Analytic Philosophy at the Catholic University of Leuven (2010).

 

References

 

Behe, M. J. (2006). Darwin's black box : the biochemical challenge to evolution (10th Anniversary Edition). New York, NY: Simon and Schuster.

 

Beloff, J. (1994). Lessons of history. Journal of the American Society for Psychical Research, 88(7), 7-22.

 

Benassi, V. A., B. Singer, et al. (1980). Occult belief: Seeing is believing. Journal for the Scientific Study of Religion, 19(4), 337-349.

 

Bonewitz, I. (1989). Real Magic. York Beach (Maine): Samuel Weiser.

 

Boudry, M., S. Blancke, et al. (2010). Irreducible Incoherence - a look into the pseudoscientist’s conceptual toolbox (under review). Quarterly Review of Biology.

 

Boudry, M. and J. Braeckman ([unpublished]). How Convenient! - The Epistemic Rationale of Self-validating Belief Systems.

 

Boudry, M. and F. Buekens ([unpublished]). The Epistemic Predicament of a Pseudoscience: Social Constructivism Confronts Freudian Psychoanalysis.

 

Boyer, P. (1994). The naturalness of religious ideas : a cognitive theory of religion. Berkeley (Calif.): University of California Press.

 

Cioffi, F. (1998). Freud and the Question of Pseudoscience. Chicago: Open Court.

 

Clarke, S. (2002). Conspiracy theories and conspiracy theorizing. Philosophy of the Social Sciences, 32(2), 131-150.

 

Crews, F. C. (1986). Skeptical engagements. Oxford: Oxford University Press.

 

Derksen, A. A. (1993). The seven sins of pseudo-science. Journal for General Philosophy of Science, 24(1), 17-42.

 

Esterson, A. (1993). Seductive Mirage: An Exploration of the Work of Sigmund Freud. Chicago: Open Court.

 

Foster, J. G. (1971). Enquiry into the Practice and Effects of Scientology. London: Her Majesty's Stationery Office.

 

Freud, S. (1924). Collected papers. Vol. 2. London: The Hogarth Press Ltd and The Institute of Psycho-Analysis.

 

Freud, S. (1957). The standard edition of the complete psychological works of Sigmund Freud. Vol. 11 (1910): Five lectures on psycho-analysis, Leonardo da Vinci, and other works. London: The Hogarth Press and the Institute of Psycho-analysis.

 

Fusfield, W. D. (1993). Some Pseudoscientific Features of Transcendental-Pragmatic Grounding Projects. In H. Albert & K. Salamun (Eds.), Mensch und Gesellschaft aus der Sicht des kritischen Rationalismus. Amsterdam-Atlanta: Rodopi.

 

Gellner, E. (1985). The Psychoanalytic Movement: The Cunning of Unreason. London: Paladin.

 

Gilovich, T. (1991). How we know what isn't so : the fallibility of human reason in everyday life. New York (N.Y.): Free Press.

 

Gordon, J. S. (1996). Manifesto for a new medicine. Reading, MA: Addison-Wesley.

 

Gosse, P. H. (1857). Omphalos: an attempt to untie the geological knot. London: J. Van Voorst.

 

Gregory, R. L. (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society B: Biological Sciences, 352(1358), 1121.

 

Grünbaum, A. (1979). Is Freudian psychoanalytic theory pseudo-scientific by Karl Popper's criterion of demarcation? American Philosophical Quarterly, 16(2), 131-141.

 

Grünbaum, A. (2008). Popper's Fundamental Misdiagnosis of the Scientific Defects of Freudian Psychoanalysis and of their Bearing on the Theory of Demarcation. Psychoanalytic Psychology, 25(4), 574-589.

 

Guttentag, O. E. (1940). Trends toward homeopathy: Present and past. Bulletin of the History of Medicine, 1172–93.

 

Hines, T. (2003). Pseudoscience and the paranormal (2nd ed.). Amherst, NY: Prometheus Books.

 

Humphrey, N. (1996). Soul searching : human nature and supernatural belief. London: Vintage.

 

Jacobs, D. M. (1998). The Threat: The Secret Agenda: What the Aliens Really Want … and How They Plan to Get it. New York: Simon & Schuster.

 

Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96(3), 109-126.

 

Kennedy, J. E. (2001). Why is psi so elusive? A review and proposed model. Journal of Parapsychology, 65(3), 219-246.

 

Kennedy, J. E. (2003). The Capricious, Actively Evasive, Unsustainable Nature of Psi: A Summary and Hypotheses. The Journal of Parapsychology, 67(1), 53-75.

 

Kukla, A. (2000). Social constructivism and the philosophy of science. New York: Routledge.

 

Lakatos, I. (1968). Criticism and the methodology of scientific research programmes. Proceedings of the Aristotelian Society, 69, 149-186.

 

Lakatos, I. and A. Musgrave (1970). Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.

 

Laudan, L. (1983). The demise of the demarcation problem. In R. S. Cohen & L. Laudan (Eds.), Physics, Philosophy, and Psychoanalysis: Essays in Honor of Adolf Grünbaum (pp. 111-128). Dordrecht: D. Reidel.

 

Mack, J. E. (1995). Abduction : human encounters with aliens. London: Simon and Schuster.

 

Mackay, C. (1974 [1841]). Extraordinary popular delusions and the madness of crowds. New York: Barnes & Noble Publishing.

 

Marks, D. (2000). The psychology of the psychic. Amherst (N.Y.): Prometheus Books.

 

Morris, H. M. (1963). The Twilight of Evolution. Grand Rapids, Mich: Baker Pub Group.

 

Park, R. L. (2002). Voodoo science: the road from foolishness to fraud. New York: Oxford University Press.

 

Perakh, M. (2004). Unintelligent design. Amherst (N.Y.): Prometheus Books.

 

Popper, K. R. (2002). Conjectures and refutations: The growth of scientific knowledge. London: Routledge.

 

Randi, J. (1981). Selective test selection. Skeptical Inquirer, 5, 12-13.

 

Shackel, N. (2005). The vacuity of postmodern methodology. Metaphilosophy, 36(3), 295-320.

 

Sheldrake, R. (1995). Seven experiments that could change the world: a do-it-yourself guide to revolutionary science. New York, NY: Putnam Publishing Group.

 

Shermer, M. (2002). Why people believe weird things : pseudoscience, superstition, and other confusions of our time. New York: A.W.H. Freeman/Owl Book.

 

Williams, R. J. (1980). Biochemical individuality. Austin, TX: University of Texas Press.

 

Wiseman, R. (2010). 'Heads I Win, Tails You Lose': How Parapsychologists Nullify Null Results. Skeptical Inquirer, 34(1), 36-39.

 

Zuefle, D. M. (1999). Tracking Bigfoot on the Internet. Skeptical Inquirer, 23, 26-29.

 

Zygmunt, J. F. (1970). Prophetic Failure and Chiliastic Identity: The Case of Jehovah's Witnesses. The American Journal of Sociology, 75(6), 926-948.



[1] We are aware that our metaphor of theoretical ‘immunization’ does not fully accord with the mechanisms of active immunization in medicine (vaccination), in which microbes are introduced into the body so as to enable its natural immune system to produce antibodies. The analogy is restricted to the fact that something from outside the system is introduced as a means of protection, but the type of mechanism is, of course, very different.

[2] It is a well-known psychological finding that people have difficulty assessing the specificity of ambiguous statements once they have found a fitting interpretation. For example, people will rate the results of a bogus personality test as an accurate description of themselves, even if these results contain only vague and ambiguous claims that are applicable to virtually anyone, a phenomenon known as the Barnum effect or Forer effect.

[3] For an analogy between conspiracy thinking and Freudian psychoanalysis from an epistemological perspective, see (Boudry and Buekens [unpublished]).

[4] It is not even clear that ‘we’ concocted those arguments rather than a mental entity that is independent of ‘us’, which is precisely what caused Wittgenstein to remark that Freud had made an “abominable mess” of the reasons and causes of our behavior.

[5] According to Popper (2002), in contrast with Freudian psychoanalysis, Marx’s initial theory was predictive and not without scientific merits, and it degenerated into pseudoscience only when some of his defenders resorted to ad hoc revisions and immunizing tactics.

[6] L. Ron Hubbard wrote: “Don't ever defend. Always attack. Find or manufacture enough threat against them to cause them to sue for peace. Originate a black PR campaign to destroy the person's repute and to discredit them so thoroughly they will be ostracized...." (Foster 1971)

[7] The E-meter is an instrument used by Scientologists to measure stress and detect engrams.

[8] The problem is also similar to the Mannheim paradox: if all discourse is ideological, how is it possible to have non-ideological discourse about ideology?