Manifesto for a Science of Clinical Psychology

Richard M. McFall
Indiana University

This article originally appeared in The Clinical Psychologist, 1991, Volume 44, Number 6, 75-88, and is reprinted with permission.

ABSTRACT

The future of clinical psychology hinges on our ability to integrate science and practice. Pointing to quality-control problems in the field, the author proposes that clinical psychologists adopt a Manifesto, consisting of one Cardinal Principle and two corollaries, aimed at advancing clinical psychology as an applied science. The rationale behind the proposed Manifesto and its implications for practice and training in clinical psychology are presented.


Traditionally, this Presidential Address has been devoted to a discussion of the speaker's personal research interests. I am deviating from that tradition, focusing instead on a topic of more general concern: the future of clinical psychology, Section III's mission in shaping that future, and an agenda for pursuing that mission into the 1990s.

The full, official name of Section III was carefully chosen by our founders: Section for the Development of Clinical Psychology as an Experimental/Behavioral Science. With this ungainly name, the founders ensured that there would be no confusion about the group's aims and values. Footnote 1 In this respect, the Section is unlike most other organizations in psychology, which tend to reflect narrower content interests or theoretical preferences. Section III was founded for the sole purpose of building a science of clinical psychology, with no allegiances to any particular population, content, or theory.

What does Section III actually do to help develop clinical psychology as an experimental/behavioral science? Among other things, we send a representative to the Division 12 Council, hold annual elections, collect a modest amount of dues, conduct periodic membership drives, publish a quarterly newsletter, publish directories of internships and training programs, organize programs for the annual APA convention, give annual awards to a Distinguished Scientist and to the author of an outstanding published dissertation, and hold a business meeting at the annual APA convention. The rest of the time, our executive committee keeps an eye on unfolding events in clinical psychology and responds appropriately to whatever opportunities or threats may arise.

It would be fair, I think, to characterize Section III as an organization that has preferred to promote science primarily by setting an example. Membership in Section III has been more a declaration of one's values than a commitment to any activities. Over the years, the Section's membership roster has read like the "Who's Who" of empirically oriented clinical psychologists, with representatives from a variety of content areas and scientific perspectives. But our members would rather do science than talk about it or get involved in political struggles over it. Section III members have tended to be too busy advancing scientific knowledge through their own research on specific problems to spend much time on general causes and crusades.

Perhaps the time has come, however, for Section III members to take a more active role in building a science of clinical psychology. Specifically, I believe that we must make a greater effort to differentiate between scientific and pseudoscientific clinical psychology and to hasten the day when the former replaces the latter. Section III could encourage and channel such activism among its members - and among clinical psychologists generally - by developing and publishing a "Manifesto," which would spell out clearly, succinctly, and forcefully what is meant by "a science of clinical psychology," and outline the implications of such a science for clinical practice and training.

What follows is my draft proposal of such a Manifesto for a Science of Clinical Psychology. On its face, it is deceptively simple, consisting of only one Cardinal Principle and two Corollaries, but its implications for practice and training in clinical psychology are profound. I am not so foolish as to expect that everyone will agree with my analysis of the situation or with every part of my proposal. If I focus attention on Section III's mission and stimulate constructive discussion of how best to achieve this mission, however, then I will have served a worthwhile purpose. Footnote 2

Cardinal Principle: Scientific Clinical Psychology Is the Only Legitimate and Acceptable Form of Clinical Psychology

This first principle seems clear and straightforward to me - at least as an ideal to be pursued without compromise. After all, what is the alternative? Unscientific clinical psychology? Would anyone openly argue that unscientific clinical psychology is a desirable goal that should be considered seriously as an alternative to scientific clinical psychology?

Probably the closest thing to a counterargument to this proposed Cardinal Principle is the commonly offered rationalization that science doesn't have all the answers yet, and until it does, we must do the best we can to muddle along, relying on our clinical experience, judgment, creativity, and intuition (cf. Matarazzo, 1990). Of course, this argument reflects the mistaken notion that science is a set of answers, rather than a set of processes or methods by which to arrive at answers. Where there are lots of unknowns - and clinical psychology certainly has more than its share - it is all the more imperative to adhere as strictly as possible to the scientific approach. Does anyone seriously believe that a reliance on intuition and other unscientific methods is going to hasten advances in knowledge? The systematic procedures of science represent the best methods yet devised for exploring the unknown. There are no close competitors. This is the rationale behind the Cardinal Principle of my proposed Manifesto.

So the alternative to scientific clinical psychology probably is not unscientific clinical psychology. Are there any other alternatives or contrasts? The most frequently mentioned is Clinical Practice. The dichotomy between science and practice is the classic one - the one codified in the Boulder Model of clinical training with its hyphenated characterization of clinical psychologists as "scientist-practitioners." The implication commonly attributed to the hyphenated Boulder Model is that there are two legitimate types of clinical psychology: clinical science and clinical practice.

This is the dichotomy one hears, for example, from undergraduates who are applying to graduate training programs in clinical psychology and are struggling with making what they perceive to be the difficult, but necessary, career choice between science and practice. When I counsel these undergraduates, I try to persuade them that they are not framing the issue correctly - that there really is no choice between science and practice. I tell them that all clinical psychologists must be scientists first, regardless of the particular jobs they fill after they earn their degrees; that becoming a clinical scientist does not mean that they are committed to working in a laboratory or university; and that choosing not to receive the best scientific training possible, by purposely opting for a training program that does not emphasize scientific training, means that they will not be as well prepared for any form of psychological activity. What I am saying to them, of course, is that all forms of legitimate activity in clinical psychology must be grounded in science, that all competent clinical psychologists must be scientists first and foremost, and that clinicians must ensure that their practice is scientifically valid.

Regrettably, many students dismiss my advice. They are convinced by the official pronouncements of psychological organizations, the characterizations of clinical psychology put forward by prominent textbooks, and the depictions of clinical psychology promulgated by other psychologists with whom they consult that the conventional distinction between scientists and practitioners is the correct one and that my counsel is completely out of touch with reality. My advice scares them, I suspect. Their futures are on the line, after all, and they are not about to lose out by following the advice of someone who seems so at odds with the dominant view.

It would go beyond the scope of this presentation to trace the history of clinical psychology's split personality, as manifested in the Boulder Model, but psychologists committed to science somehow have allowed the perspective they represent to be characterized as just one of the acceptable alternatives within clinical psychology, with no greater claim to legitimacy or primacy than any other. Look at the status of Section III within APA's Division of Clinical Psychology, for instance. Section III is just one of six sections within the Division, the others being special interest groups focusing on Clinical Child Psychology (I); Clinical Psychology of Women (IV); Pediatric Psychology (V); Racial/Ethnic and Cultural Issues (VI); and Theory, Practice, and Research in Group Psychotherapy (VII). I don't mean to imply any criticism of these other sections, but it strikes me as peculiar that the advocates for a science of clinical psychology have been relegated on the organizational chart to the level of a special interest group.

The development of clinical psychology as a science should be the central mission of Division 12, not merely one of its many competing interests. Some might argue, at this point, that Division 12 does regard the promotion of scientific clinical psychology as its foremost mission. I am skeptical, however. If Division 12 adequately represented the scientific interests of clinical psychology, then Section III would be redundant and would disappear. Let me cite just one example of why we are not redundant: it was largely through the alertness and lobbying efforts of Lynn Rehm, Section III's 1989 Chair, that the Division of Clinical Psychology was included as a cosponsor of "Science Weekend" at the 1990 APA convention.

Speaking of Science Weekend, doesn't the idea behind this event strike you as a bit odd? The annual convention of the American Psychological Association meets over a 5-day period, Friday through Tuesday. Two of those 5 days are set aside for Science Weekend, with its special focus on scientific psychology. What does that suggest? That three fifths of the convention will be devoted to unscientific or extrascientific matters? Look at the rest of the APA program and judge for yourself how much weight is given to psychology as a science, as opposed to extrascientific issues. Fortunately, Karen Calhoun and Lynn Rehm, the 1989 and 1991 Chairs of Section III, respectively, are Division 12's program chairs for the 1990 and 1991 APA conventions, thus helping to encourage a strong representation of scientific clinical psychology on the program. I would argue, however, that scientific merit should be the primary selection criterion for all APA program entries, not just the entries scheduled for a special Science Weekend. If this were the case, then it would be meaningless to designate a special weekend for the coverage of science.

The tendency to regard science as only one of the many interests of APA is reflected in an Opinion column in the July 1990 issue of The APA Monitor by APA President Stanley Graham. Taking what he must have considered to be a conciliatory stance toward the scientists in APA, he said,

There are many groups that represent some special aspect of psychology, but APA is still the organization that represents all of psychology. APA has more scientists, publishes more learned journals, and does more to support psychological research than any psychological organization in the world. As a person largely identified with practice, I am pleased that my presidential year has had, among its major accomplishments, the establishment of an Education Directorate and the enhancement of the Science Directorate. (p. 3)

Reflected in this brief depiction of psychology is the implicit idea that there are several coequal and legitimate constituencies within psychology, scientific psychology being only one - on the same organizational level as psychologists concerned with educational issues or with practice issues. Elsewhere in the same column, Graham's wording seems to suggest that scientific psychologists, research psychologists, and academic psychologists are one and the same - and distinguishable from practitioners. If this is how an APA President divides the world of psychology, is it any wonder that undergraduates applying to graduate schools equate scientific clinical psychology with academia and laboratory research, as contrasted with clinical practice? No wonder these students feel that they must choose between science and practice.

Can you imagine a similar state of affairs in any other scientific discipline? Imagine, for instance, an undergraduate chemistry major discussing her choice of graduate schools with her advisor. The student announces that she has decided to apply only to those doctoral programs in chemistry that will require the least amount of scientific training; after all, she explains, she plans to do applied chemical work, rather than basic research, after she completes her degree. Or imagine another student applying to medical school. Because he is interested in applied medicine, he is considering only those schools that require the fewest science courses. These examples are ludicrous; yet academic advisors in psychology regularly hear such views expressed by prospective graduate students in clinical psychology. What makes this situation even more disturbing is that some advisors have come to accept such views of clinical psychology as reasonable and legitimate.

The time has come for Section III - whose mission it is to promote a science of clinical psychology - to declare unequivocally that there is only one legitimate form of clinical psychology: grounded in science, practiced by scientists, and held accountable to the rigorous standards of scientific evidence. Anything less is pseudoscience. It is time to declare publicly that much of what goes on under the banner of clinical psychology today simply is not scientifically valid, appropriate, or acceptable. When Section III members encounter invalid practices in clinical psychology, they should "blow the whistle," announce that "the emperor is not wearing any clothes," and insist on discriminating between scientific and pseudoscientific practices.

Understandably, the prospect of publicly exposing the questionable practices of fellow psychologists makes most of us feel uncomfortable. Controversy never is pleasant. Public challenges to colleagues' activities certainly will anger those members of the clinical psychology guild who are more concerned with image, profit, and power than with scientific validity. However, if clinical psychology ever is to establish itself as a legitimate science, then the highest standards must be set and adhered to without compromise. We simply cannot afford to purchase superficial tranquility at the expense of integrity.

Some might argue: "But who is to say what is good science and what is not? If we cannot agree on what is scientific, then how can we judge the scientific merit of specific clinical practices?" This is a specious argument. Most of us have become accustomed to giving dispassionate, objective, critical evaluations of the scientific merits of journal manuscripts and grant applications; now we must apply the same kind of critical evaluation to the full spectrum of activities in clinical psychology. Although judgments of scientific merit may be open to occasional error, the system tends to be self-correcting. Besides, this system of critical evaluation is far better than the alternatives: authoritarianism, market-driven decisions (caveat emptor), or an "anything goes" approach with no evaluations at all. It is our ethical and professional obligation to ensure the quality of the products and services offered to the public by clinical psychology. We cannot escape this responsibility by arguing that because no system of quality assurance is 100% perfect, we should not even try to provide any quality assurance at all.

This need for quality assurance is the focus of the First Corollary of the Cardinal Principle in my proposed Manifesto for a Science of Clinical Psychology:

First Corollary: Psychological services should not be administered to the public (except under strict experimental control) until they have satisfied these four minimal criteria:

1. The exact nature of the service must be described clearly.
2. The claimed benefits of the service must be stated explicitly.
3. These claimed benefits must be validated scientifically.
4. Possible negative side effects that might outweigh any benefits must be ruled out empirically.

This Corollary may look familiar. It is adapted from recommendations made by Julian B. Rotter in the Spring 1971 issue of The Clinical Psychologist. Unfortunately, Rotter's proposal never received the serious consideration it deserved. If it had, we would be much closer to the goal of a scientific clinical psychology. Explicit standards of practice, such as I am recommending here, are a direct implication of the proposed Cardinal Principle. Adopting such standards is a prerequisite to moving clinical psychology out of the dark ages. Rotter offered this analogy:

Most clinical psychologists I know would be outraged to discover that the Food and Drug Administration allowed a new drug on the market without sufficient testing, not only of its efficacy to cure or relieve symptoms, but also of its short term side effects and the long term effects of continued use. Many of these same psychologists, however, do not see anything unethical about offering services to the public-whether billed as a growth experience or as a therapeutic one-which could not conceivably meet these same criteria. (p. 1)

"Excellence," "accountability," "competence," quality" - these are key concepts nowadays in education, government, business, and health care. It is ironic that psychologists, with their expertise in measurement and evaluation, have played a major role in promoting such concepts in other areas of society while ignoring them in their own back yard. One is reminded of the old saying: "The cobbler's children always need new shoes." The failure to assure the quality of services in clinical psychology - whatever its causes - cannot continue. Rotter (1971) sounded this warning in his concluding paragraph:

If psychologists are not more active and more explicit in their evaluation of techniques of intervention, they will find themselves restrained from the outside (as are drug companies by the FDA) as a result of their own failure to do what ethical and scientific considerations require. (p. 2)

External regulation, whether by government bureaucracies or the courts, is not the only threat. The experiences of U.S. business and industry over the past 45 years might teach clinical psychology something about other dire consequences of ignoring quality control. The story is familiar to everyone by now: U.S. manufacturers, thriving in the boom economy of the postwar period, saw little need to be concerned about the quality of their products, which were selling well the way they were. Meanwhile, the Japanese, struggling to rebuild their economy after the war, took the longer view and decided to build their industrial future on a foundation of quality. They became obsessed with quality. As a result, the Japanese now dominate the world markets in autos, electronics, cameras, and numerous other industries.

Ironically, it was an American, W. Edwards Deming, who taught the Japanese the quality-control system that helped them achieve their remarkable industrial superiority (Walton, 1986). Deming's ideas about quality were ignored in the U.S. throughout those postwar years. Only recently - when it was almost too late - has American industry come to realize, as the Ford commercial proclaims, that "Quality is Job 1." A recent turnaround in quality at Ford Motor Company is due, in large part, to the company's better-late-than-never adoption of the same Deming Management Method that had helped the Japanese build higher quality cars than Ford (Walton, 1986).

What is this remarkable Deming Management Method that spawned the Quality Revolution? Stripped of its outer shell, its engine is basically the scientific method, with its requirement for objective specification; quantification and measurement; systematic analysis and problem solving; hypothesis testing; and a commitment to persistent, programmatic, evolutionary development, as opposed to quick fixes, flashy fads, and short-term gains.

What possible relevance does all this have for modern clinical psychology? I see direct parallels. In clinical psychology, "validity" is another word for "quality." Clinical services are some of our most important products. An insistence on establishing the validity of clinical services, through the application of the scientific method, is our system of quality control. To the extent that clinical psychologists offer services to the public that research has shown to be invalid, or for which there is no clear empirical support, we have failed as a discipline to exercise appropriate quality control (cf. Dawes, Faust, & Meehl, 1989; Faust & Ziskin, 1988a, 1988b; Fowler & Matarazzo, 1988; Matarazzo, 1990). No matter how many research contributions a particular clinical psychologist may have made, or how knowledgeable that individual may be about research literature or methodological issues, if that individual fails to meet the basic standards of scientific validity in clinical practice, then that individual cannot claim to be practicing as a scientist. Furthermore, to the extent that colleagues allow an individual's unscientific practices to go unchallenged, the scientific status of the profession is diminished accordingly.

Another parallel between the struggles for quality control in industry and in clinical psychology is noteworthy: Psychologists tend to raise many of the same objections to the imposition of scientific standards on clinical psychology as were raised by U.S. companies to the ideals of consumer-oriented design and zero-defect production. For example, one objection sure to be raised to the four criteria for quality control proposed in my First Corollary is: "They are unrealistic and unachievable." This objection represents a self-fulfilling prophecy; if accepted as true, it never will be proved wrong, even if it is wrong. One of the biggest obstacles to effective quality control in industry was the deep-seated conviction that significant improvements in product quality were impossible (Walton, 1986). Advocates for increased quality were faced with a barrage of reasons why it couldn't be done, anecdotes about past failures, and rationalizations about inherent flaws in human character. Deming and the Japanese simply ignored such arguments, set out to improve quality, and left the doubters in the dust. We need to do likewise in clinical psychology.

Another argument against implementing scientific standards of practice in psychology is: "Although standards certainly are desirable and might be feasible someday, they simply are too costly and impractical to implement at this time." The CEOs of U.S. industries offered similar resistance to immediate change, blaming such short-term pressures as the need to show stockholders a quarterly profit (Walton, 1986). As clinical psychologists, we should recognize such excuses for avoiding change as the impostors that they are. There never seems to be a convenient moment for fundamental change. But viewed in retrospect, feared dislocations seldom are as bad as anticipated, and the resulting improvements usually prove to be worth the price.

I have had personal, real-world experience with the very kind of quality standards for psychological services that I am advocating here. I am a member of the Board of Directors of my local Community Mental Health Center, where I chair the Program Planning and Evaluation Committee. In 1990, we proposed to the full Board that it incorporate into the Center's mission statement and adopt as official Center policy a fundamental commitment to quality assurance: specifically, the Center would provide only those services that have been shown to be effective, according to the best scientific evidence available. I was pleasantly surprised by the positive reception this proposal received from the Board, the Center's administration, and many of the staff. It was adopted by the Board.

Of course, it is one thing to adopt an abstract policy, another thing to make it work. Our Center needed to develop and implement new procedures for the systematic review and evaluation of the scientific validity of all treatments. But the new policy required more than new procedures; it also required increased resolve and courage. The Center's commitment to the new policy was put to a difficult test almost immediately. Based on recent reviews of the research literature on treatment programs for sexual offenders (e.g., Furby, Weinrott, & Blackshaw, 1989), which raised serious questions about the effectiveness of these clinical services, the clinical staff in the Center's treatment program for sex offenders initiated a full review of their program under the Center's new policy. Understandably, there was a strong negative community reaction to the possible discontinuation of the program. The courts, for example, were distressed by the prospect of losing the program as a sentencing option for offenders. I am pleased to report that so far the Center has stuck to its policy, is proceeding with its reevaluation of treatment programs (including the sex offenders program), and has begun to consider alternative approaches to handling various patient problems. In the long run, the Center will serve the community best by devoting its limited resources to the delivery of only the most valid programs.

One of the problems facing clinical psychology is that it has oversold itself. As a consequence, the public is not likely to respond charitably when told to adjust its expectations downward. We cannot blame consumers for wishing that psychologists could solve all of their problems. Nor should we be surprised if consumers become upset when told the truth about what psychologists can and cannot do. We should expect that some consumers simply will not accept the truth, and will keep searching until they find someone else who promises to give them what they want. However, the fact that some consumers are ready and willing to be deceived is no justification for false or misleading claims; the vulnerability of our consumers makes it all the more imperative that clinical psychologists practice ethically and responsibly.

Clinical psychologists cannot justify marketing unproven or invalid services simply by pointing to the obvious need and demand for such services, any more than they could justify selling snake oil remedies by pointing to the prevalence of diseases and consumer demand for cures. Some clinicians may ask: "But what will happen to our patients if we limit ourselves to the few services that have been proven effective by scientific evidence?" Snake oil merchants probably asked a similar question. The answer, of course, is that there is no reason to assume that patients will be harmed if we withhold unvalidated services. In fact, in the absence of evidence to the contrary, it is just as reasonable to assume that some unvalidated remedies actually are detrimental to patients and that the withholding of these will benefit patients.

If the practices of clinical psychologists were constrained, as proposed in my First Corollary, where would that leave us? That is, what valid contributions, if any, might psychologists make to the assessment, prediction, and treatment of clinical problems? This question highlights the major reason why scientific training must be the sine qua non of graduate education in clinical psychology. Faced with uncertainty about the validity of assessments, predictions, and interventions, clinicians would be required by the First Corollary to reduce that uncertainty through empirical evidence before proceeding to offer such services. Footnote 3 The Corollary explicitly states that clinical scientists may administer unproven psychological services to the public, but only under controlled experimental conditions. While untested services represent the future hope of clinical psychology and thus deserve to be tested, they also represent potential risks to patients and must be tested cautiously and systematically. Until scientific evidence convincingly establishes their validity, such services must be labeled clearly as "experimental." Footnote 4 Only those psychologists with scientific training and expertise will be in a position to participate in this critical evaluation of clinical services.

It should be added that clinicians-in-training are unproven commodities as well, even when they are administering services that have been proven to be effective in the hands of experienced clinicians. Therefore, the validity of the services offered by these apprentice clinicians must be evaluated systematically before each individual therapist - an integral component of the clinical service - is moved from the "experimental" list to the "approved" list. Even "approved" and "senior" clinicians must be cognizant of the limits to their personal validities and take an experimental approach to validating changes in their clinical roles.

In short, the First Corollary requires that clinicians practice as scientists. This brings us to the Second, and final, Corollary of my proposed Manifesto for a Science of Clinical Psychology:

Second Corollary: The primary and overriding objective of doctoral training programs in clinical psychology must be to produce the most competent clinical scientists possible.

This point follows logically, I believe, from all that has been presented thus far. It also should require little elaboration. In a practical sense, however, it is not entirely clear what the most effective methods are for training clinical psychologists to be scientists. Everyone seems to have opinions about what makes for effective scientific training, but such views seldom are backed by sound empirical evidence. Even where evidence exists, it may exert little influence on the design of clinical training programs. It ought to be otherwise, of course; those who train scientists should be reflexive, taking a scientific approach themselves toward the design and evaluation of their training programs. Unfortunately, the structure and goals of graduate training in clinical psychology tend to be highly resistant to change. Institutional, departmental, and personal traditions, alliances, and empires are at stake, and these tend to make the system unresponsive to logical, empirical, or ethical appeals. These limits notwithstanding, let me sketch four of the more important issues raised by this Second Corollary.

First, the Boulder Model, with its stated goal of training "scientist-practitioners," is confusing and misleading. On the one hand, if the scientist and practitioner are synonymous, then the hyphenated term is redundant. On the other hand, if the scientist and the practitioner represent two distinct goals, either as competing alternatives or as separate but complementary components, then this two-headed view of clinical psychologists is inconsistent with the kind of unified scientific training being advocated in the present Manifesto. Therefore, the Boulder Model's dualistic, hyphenated goal should be replaced by one that stresses the unified and overriding goal of training clinical scientists.

Second, scientific training should not be concerned with preparing students for any particular job placements. Graduate programs should not be trade schools. Scientists are not necessarily academics, and persons working in applied settings are not necessarily nonscientists. Well-trained clinical scientists might function in any number of contexts - from the laboratory, to the clinic, to the administrator's office. What is important is not the setting, but how the individual functions within the setting. Training program faculty members need to break out of the old stereotypic dichotomous thinking represented by the Boulder Model. They need to stop worrying about the particular jobs their students will take and focus instead on training all students to think and function as scientists in every aspect and setting of their professional lives.

Third, some hallmarks of good scientific training are rigor, independence, scholarship, flexibility in critical thinking, and success in problem solving. It is unlikely that these attributes will be assured by a checklist approach to required content areas within the curriculum. Increasingly, however, there has been a tendency - prompted largely by the need to ensure that the criteria for state licensing and certification will survive legal challenges - toward taking a checklist approach to the accreditation of graduate training programs in clinical psychology. Too much emphasis has been placed on the acquisition of facts and the demonstration of competency in specific professional techniques, and too little emphasis has been placed on the mastery of scientific principles; the demonstration of critical thinking; and the flexible and independent application of knowledge, principles, and methods to the solution of new problems. There is too much concern with structure and form, too little with function and results.

Ideally, we would have been taking a scientific approach to answering the question of how best to train clinical psychologists; unfortunately, this has not been done. For the present, then, there simply is no valid basis for deciding what is the "best" way to train clinical scientists in these desired attributes. The political move to homogenize the structure and content of clinical training programs not only is inappropriately premature, but it also is likely to retard progress toward the goal of developing truly effective training programs. The state of knowledge in our field is primitive and rapidly changing; therefore, efforts to establish a required core curriculum for clinical training, based on such uncertain knowledge, would result in "training for obsolescence." Similarly, efforts to standardize prematurely on training program structures and methods simply will perpetuate the status quo, discourage experimentation, and inhibit evolutionary growth. Until we have good evidence that one method of training is superior to any others, how can we possibly decide (except on political or other arbitrary grounds) that all training programs should cover a fixed body of content and technique, follow a set curriculum, or adopt a common structure? Recently, for example, there has been a move to require that accredited clinical training programs provide first-year students with practicum training. This proposed requirement has received considerable support, despite the complete lack of any clear evidence that it would lead to increased scientific or clinical competence in students.

Until we have a valid basis for choosing among the various options, our policy should be to encourage diversity - to "let a thousand flowers grow." Footnote 5 Out of such diversity, we might learn something valuable about effective training methods. Of course, diversity by itself is uninformative; it must be accompanied by systematic assessment and evaluation. The ultimate criterion for evaluating a program's effectiveness is how well its graduates actually perform as independent clinical scientists. Thus, program evaluations should focus on the quality of a program's products - the graduates - rather than on whether the program conforms to lists of courses, methods, or training experiences. How a program's graduates perform becomes the dependent variable; program characteristics serve as independent variables. If the aim of our graduate programs is to train clinical scientists, then every program's faculty ought to model scientific decision making when designing and evaluating its program.
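To make that dependent-variable framing concrete, here is a minimal sketch of the kind of analysis it implies. It is entirely my illustration, not part of the original address; the program characteristics, the outcome measure, and all data below are invented placeholders. The idea is simply to regress a measure of graduates' performance on candidate program characteristics across a sample of programs.

```python
# Hypothetical sketch: graduates' performance as the dependent variable,
# program characteristics as independent variables. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
n_programs = 40

# Candidate independent variables (illustrative program characteristics).
research_hours = rng.uniform(100, 600, n_programs)   # required research hours
early_practicum = rng.integers(0, 2, n_programs)     # practicum in year 1? (0/1)

# Dependent variable: some validated measure of graduates' performance as
# independent clinical scientists (simulated here purely for illustration).
outcome = 0.01 * research_hours + rng.normal(0, 1, n_programs)

# Ordinary least squares: which characteristics predict graduate outcomes?
X = np.column_stack([np.ones(n_programs), research_hours, early_practicum])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(dict(zip(["intercept", "research_hours", "early_practicum"], coef.round(3))))
```

On this view, a proposed accreditation requirement such as first-year practicum training would earn its place only if its estimated effect on graduate outcomes turned out to be reliably positive.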

Richard Feynman (1985), the Nobel Prize-winning physicist, used the term "Cargo Cult Science" to characterize "sciences" that are not sciences. He drew an analogy with the "cargo cult" people of the South Seas:

During the war (the cargo cult people) saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas - he's the controller - and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. (p. 311)

Much of the debate over how best to train scientists in clinical psychology smacks of Cargo Cult Science - preoccupation with superficial details of form, but a failure to comprehend the essence. Many clinical training programs scrupulously follow rituals that they believe to be associated with the successful production of scientists. They design curricula, assign readings, hold discussions, emphasize statistics and research methodology, give tests, require theses and dissertations, arrange for practica and internships, and hold formal rites of passage. But something essential is missing. Scientists don't emerge. Airplanes don't land.

Like the South Sea Islanders, the faculties of clinical training programs cling to the belief that if only they could arrange things properly - improve the shapes of the headphones, improve the sequence of courses - their systems at last would produce results. But their preoccupation with arranging details is like rearranging the deck chairs on the Titanic. When something essential is missing, no amount of tinkering with form will make things work properly.

According to Feynman (1985), one of the essential missing ingredients in Cargo Cult Science is "scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty - a kind of leaning over backwards."

If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition .... The idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

This suggests a good place to focus our attention when thinking about how we might improve the quality of graduate training in clinical psychology. As a field, if we fail to display such scientific integrity, how can we hope to be successful in training scientists? No amount of formal classwork will replace the integrity lost by a failure, for example, to challenge exaggerated claims concerning the value of a clinical service. We can give students lectures about professional ethics, but if the lecturers fail to model utter honesty by leaning over backwards to provide a full, fair, critical discussion of psychological theories, research, and clinical practice, then few students will emerge as scientists, few planes will land.

Fourth and finally, for clinical psychology to have integrity, scientific training must be integrated across settings and tasks. Currently, many graduate students are taught to think rigorously in the laboratory and classroom, while being encouraged - implicitly or explicitly - to check their critical skills at the door when entering the practicum or internship setting. Such contradictions in training cannot be tolerated any longer. Training programs in clinical psychology must achieve a scientific integration of research, theory, and practice. The faculties of clinical training programs must assume the responsibility for ensuring that students' practical experiences are integrated with their scholarly, conceptual, and research experiences. Until that happens, there can be no unified scientific training in clinical psychology.

THE MANIFESTO AS A CALL TO ACTION

Different camps within clinical psychology have maintained an uneasy truce over the years, partly out of necessity (in the early days they were allies against the threats of psychiatry) and partly out of convenience, custom, and economic self-interest. But events such as the unsuccessful effort to reorganize APA, the subsequent creation of competing organizations such as the American Psychological Society (APS), and recent challenges to APA's sole authority to accredit graduate training programs in psychology are examples of the tension, distrust, and conflict that have surfaced among the various camps over the past decade. Change is in the wind; nothing is likely to be quite the same in the future.

Today's clinical psychologists face a situation somewhat like that of the bicyclists in the Tour de France. We have been riding along at a comfortable pace, all bunched together, warily eyeing one another, worrying that someone might try to get a jump on us and break away from the pack. It has been like an unspoken conspiracy. As long as no one gets too ambitious and tries to raise the standards, we all can lie back and continue at this pace indefinitely. Labor unions have a name for the wise guys who won't go along with the pack: They're called "rate busters." In my more cynical moments, I sometimes suspect that many psychologists view serious proposals for scientific standards in practice and training as a betrayal, rate busting, or breaking away from the pack.

Inevitably, a breakaway will come. Some groups of clinical psychologists will become obsessed with quality, dedicated to achieving it. These psychologists will adopt as their manifesto something similar to the one I have outlined here. When this happens, the rest of clinical psychology - all those who said that it couldn't be done, that it was not the right time - will be left behind in the dust.

The Manifesto I have outlined here is a serious proposal; I was not trying to be provocative. The time is long overdue for a breakaway, for taking seriously the idea of building a science of clinical psychology. I would like to believe that Section III members will be well represented among the group of psychologists that successfully makes the break, when it comes. In fact, I dare to wish that Section III might promote such a break by formally adopting my proposed Manifesto, or one like it, hoisting it high as a banner around which all those who are committed to building a science of clinical psychology might rally.

Author Notes

This paper is based on the author's Presidential Address to Section III of Division 12 of the American Psychological Association, at the 1990 Annual Convention, Boston, MA. The discussion of clinical training issues contains supplemental paragraphs adapted from this author's symposium presentation, at the same APA convention, on "The Future of Scientist-Practitioner Training." The views expressed are the author's, not necessarily those of Section III or its members.

Correspondence should be addressed to Richard M. McFall, Department of Psychology, Indiana University, Bloomington, IN 47405.

Footnotes

1 In the Spring of 1991, Section III voted to change its name to "Society for a Science of Clinical Psychology." This action represented no change in organizational philosophy but simply was an effort to state the organization's purpose more succinctly.

2 Reviewers of an earlier draft of this manuscript made a number of helpful suggestions and raised several questions. In the spirit of encouraging a dialogue about the proposed Manifesto, yet hoping to avoid digressions that might obscure the thread of my original argument, I have summarized the reviewers' questions in footnotes and have offered replies.

3 Q: How adequately can conventional research methods, with their reliance on quantitative analyses and group results, answer clinical questions about how best to approach the unique problems of a specific client? A: This question raises the classic debate concerning "idiographic vs. nomothetic" approaches to clinical prediction, where "prediction" includes the task of choosing, based on estimated results, the most promising treatment for a particular client with a particular set of problems. Despite the intuitive appeal of the idiographic approach, both the empirical evidence and the force of logical analysis unequivocally support the superior validity of the nomothetic approach (e.g., Dawes, Faust, & Meehl, 1989). The specifics of the evidence and arguments on this issue go far beyond the bounds of the immediate presentation. Helping students work through this issue, in fact, is one of the central aims of graduate training in scientific clinical psychology, taking several years and requiring a mastery of demanding material ranging from the concepts of base rates and cutting scores to the accuracy of clinical and actuarial predictions. Contrary to popular opinion, the scientific method, with its quantitative and nomothetic emphasis, consistently does the best job of predicting the optimal treatments for individual cases. Dubious readers are encouraged to start by (re)reading Meehl's (1973) collected papers.
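As one concrete illustration of why base rates matter for clinical prediction (my example, not drawn from the footnote's sources; all figures are hypothetical), consider how the probability that a positive test result is correct collapses as a disorder becomes rarer, even though the test's accuracy stays fixed:

```python
# Bayes' rule illustration: the predictive value of a positive result depends
# on the base rate, not just on the test's accuracy. The sensitivity,
# specificity, and base rates below are hypothetical.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(disorder | positive result), computed via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.80, 0.80  # a test that is "80% accurate" in both senses

for base_rate in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(base_rate, sens, spec)
    print(f"base rate {base_rate:>4.0%}: P(disorder | positive) = {ppv:.0%}")

# Prints 80% at a 50% base rate, but only 31% at 10%, and 4% at 1%.
```

An intuitive judge who neglects the base rate will overvalue positive results in exactly the low-base-rate situations that clinical prediction most often confronts; this is part of what the actuarial literature cited above formalizes.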

4 Q: Won't this emphasis on employing only well-documented interventions tend to stifle creativity in the search for even better interventions? A: If "creativity" is equated with "winging it" in therapy, then the emphasis should, indeed, curtail such unwarranted freelance activity. But if "creativity" refers to the systematic development of ever-improving treatment methods, then the recommendations presented here should enhance, rather than stifle, such creativity. Without documented treatment standards against which to compare the effects of novel interventions, how would it ever be possible to tell if the new (creative) approaches are any better than the established approaches? The requirement that new approaches beat the current standards before they can be accepted ensures that clinical psychology will show genuine advancement, rather than merely chasing after fads and fashions.

5 Q: Isn't there a logical inconsistency here between recommending diversity in clinical training, on the one hand, and recommending that only "the best" therapy be used for a given clinical problem, on the other hand? A: No. In training and therapy alike, when valid evidence indicates that one approach is better than another, we are obligated to choose the "best" approach. (There are exceptions, of course, such as when the costs of the best approach are prohibitive, or when controlled experimental trials are being conducted in an effort to surpass the current best.) Where there is no evidence of a best approach, there are two possibilities: (a) The evidence indicates that doing something is better than doing nothing, in which case choosing any of the comparable options is justified, or (b) the evidence does not indicate that doing something is better than doing nothing, in which case it is not appropriate to proceed. Thus, because we can demonstrate positive gains in the graduates of scientific training programs in clinical psychology (but not necessarily in the area of increased clinical sensitivity, according to Berman & Norton, 1985), it is appropriate that clinical programs continue to offer scientific training, with a diversity of training approaches being tolerated until valid grounds for a preference are found. In clinical practice, there are some problems for which an obligatory best approach has been identified. There are other problems, however, for which no approach has shown incremental validity, making "no intervention" the appropriate choice (except under controlled experimental conditions).
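The footnote's decision rule can be summarized schematically as follows (my rendering, not part of the original text; the approach name in the usage example is a hypothetical placeholder):

```python
# Schematic rendering of the footnote's decision rule for choosing an
# approach, in therapy or training alike. Evidence categories are simplified.

def choose_approach(best_supported=None, something_beats_nothing=False):
    """Return the action the decision rule prescribes.

    best_supported: name of an approach with demonstrated superiority, or None.
    something_beats_nothing: True if evidence shows that intervening is
        better than not intervening at all.
    """
    if best_supported is not None:
        return f"use the best-supported approach: {best_supported}"
    if something_beats_nothing:
        return "choose any of the comparably supported options"
    return "no intervention, except under controlled experimental conditions"

# Hypothetical usage:
print(choose_approach(best_supported="treatment A"))  # obligatory best exists
print(choose_approach(something_beats_nothing=True))  # comparable options
print(choose_approach())                              # no incremental validity
```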

REFERENCES

Berman, J. S., & Norton, N. C. (1985). Does professional training make a therapist more effective? Psychological Bulletin, 98, 401-407.
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668-1674.
Faust, D., & Ziskin, J. (1988a). The expert witness in psychology and psychiatry. Science, 241, 31-35.
Faust, D., & Ziskin, J. (1988b). Response to Fowler and Matarazzo. Science, 241, 1143-1144.
Feynman, R. P. (1985). Surely you're joking, Mr. Feynman! New York: W. W. Norton.
Fowler, R. D., & Matarazzo, J. D. (1988). Psychologists and psychiatrists as expert witnesses. Science, 241, 1143.
Furby, L., Weinrott, M. R., & Blackshaw, L. (1989). Sex offender recidivism: A review. Psychological Bulletin, 105, 3-30.
Graham, S. (1990). APA supports psychology in both science and academe. The APA Monitor, 21, 3.
Matarazzo, J. D. (1990). Psychological assessment versus psychological testing: Validation from Binet to the school, clinic, and courtroom. American Psychologist, 45, 999-1017.
Meehl, P. E. (1973). Psychodiagnosis: Selected papers. Minneapolis: University of Minnesota Press.
Rotter, J. B. (1971). On the evaluation of methods of intervening in other people's lives. The Clinical Psychologist, 24, 1-2.
Walton, M. (1986). The Deming management method. New York: Perigee.
