Science for Good or Ill

Chandler Davis

(first printed as a booklet in the Waging Peace Series,
Nuclear Age Peace Foundation, 1990)

Science, seen from outside, may seem like a free lunch: a source of unexpected wealth and cures for what ails us. Or it may seem like a dangerous juggernaut, generating appalling weapons, out of control. From the inside, to us who practice it, science as a whole is scarcely visible; it is simply there as the context for our lives. Only with a special effort can we ask ourselves the big questions.

Clearly, both of the outsider’s simple views are right. Science offers the possibility both of great benefit and of great damage. Those who practice science should admit, however reluctantly, that our work can have consequences with huge moral implications which we are not being invited to control. I am one of those who cannot duck the responsibility. We recall that the defendants in the Nuremberg War Crimes Trials were not exonerated on the plea that they were only following orders, and we feel in the same way that we can’t evade the issue on the grounds that we were only following where science led.

Let me give a few examples of efforts by scientists to move toward taking a common moral position. There are many. I’ll start with one in which I was involved.

Hundreds of mathematicians signed a statement which appeared as an ad in the Notices of the American Mathematical Society in 1967, right next to recruiting ads from Lockheed, Litton, the National Security Agency, and so on. The statement read:

Mathematicians: Job opportunities in war work are announced in the Notices, in the Society’s Employment Register, and elsewhere. We urge you to regard yourselves as responsible for the uses to which your talents are put. We believe this responsibility forbids putting mathematics in the service of this cruel war.

It is considered quite bad form in our society to blow the whistle on any activity for which money can be paid. Making such a public statement, we put ourselves on the spot. Some of the objections come from outside; some from other scientists. Here are a few of those objections.

OBJECTION #1: “What are you, anti-science? anti-progress? Science is knowledge, knowledge is power. How could you be against knowing more?”

When they say anti-science, I know what the word means. Hostility to rational knowledge is rife today, and the harmful fallout from technology may feed it. Time was, several centuries ago, that there was an anti-scientific ideology of some importance even among the learned. People sometimes blamed scientists (in the image of the arrogant Dr. Faustus) for trying to understand things that God did not intend mortal humans to understand. Still I don’t see that the call for social responsibility in science is of this nature. I think this objection is, therefore, simply off the subject.

We are not anti-science in general. We do believe that science can be the expansion of our understanding, and if the use of science raises problems (of course I am saying that it raises problems) we’ll have to deal with them by understanding them, using more knowledge, not less.

Those who label as anti-scientific anyone who questions science are demanding that we give science carte blanche. But we mustn’t. Science is a human product, and people must control it for human purposes.

A further, subtler point: there’s a hidden bias in this objection. If we exempted science from criticism we would be giving carte blanche not just to science in general (something which might be arguable) but to science as it stands! We may agree science is on the whole good, but we must be careful not to suppose it is perfect. Even if every answer scientists think they have now were just so (which is too much to hope), one might still wonder whether they have always chosen the best questions to investigate: even true statements may be criticized as trifling or irrelevant.

OBJECTION #2: “You can’t reverse progress. It might have been better not to discover nuclear fission, but you can’t undiscover it.”

Well, no, although sometimes detailed knowledge is almost completely forgotten. If the nations could get their governments to agree that nuclear bomb technology should no longer be used, they might want to stop educating bomb technicians. Practical techniques might become very rusty in one generation. But the principles are very likely to remain available, just as historians who wonder about the crossbow can find enough in the historical and archeological record to build a pretty good working reproduction. And again, we didn’t suggest undiscovering; criticism of some of science’s products doesn’t imply a demand to abolish knowledge.

Still there’s something to it. The fear of science expressed from ancient legends up to today often includes this warning: you can’t put the djinni back in the bottle; you can’t put the lid on Pandora’s box. True. Our choice of research directions affects what we find now, and moreover, it also affects what is available to our successors.

OBJECTION #3: “There’s no such thing as responsibility in scientific research, because when you do the research you can’t foresee what its uses may be.”

This sort of objector points gleefully to the case of Godfrey Harold Hardy. Heartsick to see his fellow mathematicians streaming into war work (I’m talking now about the First World War), Hardy became a total pacifist. He declared later in life that he was glad that none of his scientific results had ever been of the slightest use to anyone. (A paradoxical stance indeed for this passionate humanitarian: you’d think he would have wanted to be of use.) He was ignoring, perhaps because it is mathematically so simple, the Hardy-Weinberg Law which he contributed to population genetics. He was also ignoring a more spectacular exception, which he didn’t foresee and which few people could have imagined before the 1940s: the so-called Hardy spaces, whose theory he founded in the 1920s, are now so central to what we call linear systems theory that whole conferences are devoted to them, paid for largely by Air Force grants, and populated in considerable part by military researchers.
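
(How simple is that Hardy-Weinberg Law? This simple, as any genetics text states it: if an allele A has frequency p in a randomly mating population and its alternative a has frequency q = 1 - p, then a single generation settles the genotype proportions at

\[
  AA : Aa : aa \;=\; p^2 : 2pq : q^2,
  \qquad p^2 + 2pq + q^2 = (p+q)^2 = 1,
\]

and there they stay, generation after generation, absent selection, mutation, and migration. One line of algebra, which is presumably why Hardy held it so cheap.)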

What do I have to say about Hardy’s guilt in the military uses of his ideas? He would have regarded them as entirely deplorable, and on the whole he might have been right. They do not, however, illustrate the impossibility of knowing ahead of time the applications of one’s research: granted, Hardy didn’t foresee any application of these spaces, but he didn’t try. One of our duties as scientists is to try to perceive the relations between different ideas; it’s not a duty we should be reluctant to carry out, because much of the excitement of science comes right there, in the richness of the connections that appear. I think this point is important and too seldom made, but it’s only a small part of my answer to objection #3.

If Hardy had understood that one of his spaces had potential practical application by way of linear prediction theory, it might have soured him, but most of us don’t share his extreme rejection of applications. When linear prediction theory was developed in the late 1930s, by Andrei N. Kolmogorov in the Soviet Union and Norbert Wiener in the United States, they did see the connections, from Hardy spaces through probability theory to communication and missile detection. The applicability was intrinsic to their investigation. What’s more, they were willing participants in military research against the Nazis. Though they broke relations with the military after the Second World War (Wiener in a drastic open letter in the Atlantic Monthly), the military didn’t break relations with the ideas they had given it. The djinni was not put back into the bottle.

One can imagine Hardy saying to these two gentle, democratic men, as concerned and humanitarian as he was, “You see, my point is proved. You gave your ideas knowing they had uses, but you couldn’t restrict them to non-military uses. When the generals got them, you couldn’t restrict the generals to using them only against the Nazis; and they used them for each of your countries against the other.” If he happened to notice me, I would come in for sharper reproach because I had the benefit of more hindsight. Some mathematical results I got in the 1970s (after so much had been written on scientific responsibility, after the ad against war work which I just proudly quoted) are also often cited at those same military-supported conferences.

Actually, some of the signers of that 1967 war work ad were simultaneously working on military contracts! Two of them made the newspapers at the time, when the generals threatened to cut off their contract money in reprisal for signing the ad. This outraged many professors as an interference with their freedom of expression. In the end, the threat to cut their support off was not carried out. We might ponder also whether they were being consistent. They undoubtedly believed there would be no serious military uses of the research they were doing on weather modification. The military command, however, thought weather modification was a directly military topic, and indeed attempted to turn cloud-seeding into a weapon against the Vietnamese.

OBJECTION #4: “You have no right to censor science anyway. What are we scientists? Just the employees who do the work. Decisions on what to do with the results of scientific work are made by society at large; it would be elitist for us to claim exclusive rights to them just because we have this special role of generating the ideas.”

This line has annoyed me for years. It bothers me because it’s wrong, morally and factually, and at the same time it’s so close to right that I hate to have to oppose it.

It’s morally wrong, in the first place. If my work, or some part of it, is cruel and anti-human, that touches me more closely than it touches anybody else, because I am doing it; and that gives me a special responsibility which nobody should try to get me to pass off. This is not an elitist attitude; it applies to every participant in an anti-human project.

The objection is factually wrong, in the second place. Scientists are not censoring science or the uses of science, in practice, but others who have less to do with creating it are. Let’s keep them in mind.

The owners of the Johns-Manville Corporation decided to keep on marketing asbestos products for decades after they knew asbestos was causing thousands of cancers; on a larger scale, the owners of RJR Nabisco (whoever owns it this week!) continue to push tobacco decades after they knew it was causing hundreds of thousands of cancers (and as the American consumer begins erecting a defense against it, the tobacco corporations line up diplomatic pressure from the Bush administration to induce Asian governments to import the stuff). For a third example, the decision to obliterate Hiroshima and Nagasaki was not made by the scientists (many of them were vocally against it, even Edward Teller), but it wasn’t made by society at large either: it was made secretly by President Harry Truman and his cabinet, and even long afterward, it remains hard to get a straight account of their motives.1

And yet something about the objection is right. Decisions on the allocation of research resources, decisions on science education, and decisions on the course of technology really ought to be made by society at large. I agree with that. Only it is no argument against my raising the problem of social responsibility of science. Quite the reverse: it’s an argument for talking about it more. In this essay, I’m going out of my way to raise it with non-scientists as well as scientists. If scientists are more intimately involved than the rest, it doesn’t follow that I want to exclude the rest. I don’t.

Even if decisions about supporting and using science are concentrated outside the scientific community, we may perhaps legitimately insist on our right to negotiate as a group with our paymasters. When relations with the military came up in the American Mathematical Society in 1987, it was in this form. Hundreds of individual members petitioned for a policy statement (1) calling on the Society to seek more non-military funding for the nation’s mathematical research, and (2) directing the Society’s officers to do nothing to further mathematicians’ involvement in the Strategic Defense Initiative. The proposal was submitted to a referendum of the Society’s membership in 1988. With over 7000 voting (about twice the number usually participating in election of officers), the statements passed by votes of 4034 to 2293 and 5193 to 1317 respectively. Now, if only the officers of the Society can be brought to act accordingly.

Is it clear that such group stands are legitimate? Perhaps the only proper form of resistance to misuse of science is the exercise of individual conscience? This is sometimes said directly, as in objection #5 below, but more often implied.

OBJECTION #5: “Those of you who don’t think you can conscientiously do certain scientific work certainly have a right to freedom of conscience. No need to mount campaigns, just vote with your feet. You can just change your field of science, or even change to non-technical employment.”

Well, sure we can. A lot of us do. John Gofman left his job in nuclear medicine so he could publish freely his own estimates, not his employers’, of radiation damage. Robert C. Aldridge quit as a missile design engineer in order to publish strategic weapon analyses for all of us to share. Molecular biologists leave the lab to organize agronomical stations for Central American farmers much too poor to pay for them.

Less extreme cases abound: many of my colleagues and students have switched from one “normal” position to another to reduce their involvement with destructive technology or increase the constructive utility of their work. I mentioned the mathematicians’ ad of 1967; one of its signers moved from Lockheed to a university, one moved from Sandia Corporation to another university. Recently one of the X-ray laser whizzes left the Lawrence Livermore Labs in search of less bellicose research topics.

And yet, I wouldn’t accept the notion that responsibility in science should mean only for some individuals to opt out. That would be an artificial limitation. Nevertheless, our campaigns are often to disseminate individual statements of conscience, and this may have the virtue of clarity. I’ll give a few more examples of such statements.

An organization called the Committee for Responsible Genetics, led by the MIT biologist Jonathan King among others, has been circulating this statement internationally:

We, the undersigned biologists and chemists, oppose the use of our research for military purposes. Rapid advances in biotechnology have catalyzed a growing interest by the military in many countries in chemical and biological weapons and in the possible development of new and novel chemical and biological warfare agents. We are concerned that this may lead to another arms race. We believe that biomedical research should support rather than threaten life. Therefore, WE PLEDGE not to engage knowingly in research and teaching that will further the development of chemical and biological warfare agents.2

Bearing in mind the Hippocratic Oath traditionally taken by medical doctors, we might put such statements in broader terms. If physicians state their obligation to use their specialty only for the good of humanity, why not other professions? Consider the following oath proposed at an international conference in Buenos Aires in 1988 by Guillermo Andrés LeMarchand:

Aware that, in the absence of ethical control, science and its products can damage society and its future, I, ……, pledge that my own scientific capabilities will never be employed merely for remuneration or prestige or on instruction of employers or political leaders only, but solely on my personal belief and social responsibility based on my own knowledge and on consideration of the circumstances and the possible consequences of my work that the scientific or technical research I undertake is truly in the best interest of society and peace.

This statement has been signed, among others, by a large majority of the 1988 graduating class in Buenos Aires, and by many scientists internationally.

The following statement was disseminated at Humboldt State in California in 1987, and subscribed to since then by large contingents at graduations there and at many other universities:

I, ……, pledge to thoroughly investigate and take into account the social and environmental consequences of any job opportunity I consider.

A number of prominent scientists have publicly subscribed to the following Hippocratic Oath for Scientists, Engineers and Technologists:

I vow to practice my profession with conscience and dignity;

I will strive to apply my skills only with the utmost respect for the well-being of humanity, the earth and all its species;

I will not permit considerations of nationality, politics, prejudice or material advancement to intervene between my work and this duty to present and future generations;

I make this Oath solemnly, freely and upon my honour.3

What is really meant by such pledges? Is it enough for those who feel that way to move to a different job?

It’s not enough. The reason we raise the issue in scientific societies, many members of which may already be doing clearly constructive work, and in graduating classes of students, and in general audiences, is that science and technology are social products. The technology of Zyklon-B for the Nazis’ gas chambers, or of binary nerve gas for today’s weapons, is a product of scientific lore built up by an intellectual community. The social responsibility of the biological scientists is not merely to get a name other than one’s own attached to the job! Just as the Hippocratic Oath should make each doctor repudiate Nazi-style experimentation on human subjects by all doctors, biological responsibility should mean that each biologist refrains from misuse of the science and gets others to refrain too. Responsibility should be applied collectively.

Unrealistic? Sure. The level of mutuality I’m imagining here is unattainable now. The vision is of a process of deepening a community code of ethics over many stages. As the need is felt more widely, it can happen. Right now we see medical ethics being reworked, with great attention from thousands of specialists. Scientific and engineering ethics can be developed the same way: publicly and worldwide. So far, it is lagging way behind.4

COMPLEXITY

I’ve been understating the task. To this point, I’ve been speaking as if it were typically easy to see the difference between healthy and noxious science. As if the only thing lacking were good will and honesty. No, the big problems are really problematic. Answers aren’t clear. And even when hard work makes them clear, there may still be battles to get the ethical thing done.

Sometimes those of us who are sounding wake-up calls, following the example of Rachel Carson, give the impression that once we all wake up the way will be plain. We do that by emphasizing a glaring incongruity, focusing on it so everyone will see it is serious, at the cost of making it look simple, whereas really its complexity is part of what makes it so serious.

Let me try to set the record straight a little by dwelling on how we may try to cope with complexity.

One way to bring order into a confusingly complex problem is “cost-benefit analysis.” Analysts try to weigh the power to be drawn from the Aswan High Dam or the proposed Sardar Sarovar Dam against the damage caused by flooding of farmland upstream, loss of silting downstream, and destruction of river and sea ecosystems. In the case of British Columbia, they weigh the value of the aluminum smelted with the hydroelectric power against the value of the salmon fisheries destroyed, place dollar values on each item, and add up the balance. More ambitiously, one may calculate the dollar cost of revising power generation methods worldwide so as to restore the carbon dioxide balance (the total cost is in the trillions). Such computations have great potential, but keep in mind their limitations.
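
To make the procedure concrete, here is a minimal sketch of such a tally. Every line item and every dollar figure in it is invented for illustration; none is drawn from an actual study of the dams or smelters just mentioned, and a real analysis would have to defend each number, and its uncertainty, separately.

```python
# A bare-bones cost-benefit tally for a hypothetical dam project.
# All figures below are invented for illustration only.

benefits = {
    "hydroelectric power sold (annualized)":    120_000_000,
    "aluminum smelted with that power":          80_000_000,
}

costs = {
    "farmland flooded upstream":                 45_000_000,
    "silt no longer reaching downstream farms":  30_000_000,
    "salmon fisheries destroyed":                60_000_000,
    "river and estuary ecosystems":              50_000_000,  # hardest to price
}

# The whole method is this one subtraction.
balance = sum(benefits.values()) - sum(costs.values())

for name, value in {**benefits, **costs}.items():
    print(f"{name:45} ${value:>12,}")
print(f"{'net balance':45} ${balance:>+12,}")
```

The arithmetic is the easy part; everything rides on the inputs and on whose ledger each line lands in, which is just where the reservations below begin.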

First, they are no more precise than the inputs, and it’s very hard to know some of the numbers going in. I’ve never done such a labor of marshalling quantitative data to be synthesized, and I respect the audacity of those who do; my skepticism is not ungenerous to them, I hope; but every user of such analyses can see that skepticism is a necessary part of using them sensibly.

Second, I sometimes insist on asking: whose dollars? The aluminum company owns its refinery and makes a profit. If the analysis shows that the costs outweighed the benefits, does that mean the company owes the salmon-fishing coastal Indians damages for the fish they don’t have any more? If not, why not? If so, then the refinery was a bad investment or the aluminum was underpriced. (There’s a can of worms! If the economic realities are different from what the market saw at the time, then the dollar figures have to be revised throughout.) Similarly, we hear talk of whether “industry” can “afford” eliminating chlorofluorocarbons. Come now! If the physics of the ozone hole is as now believed, then cost-benefit analysis will show on the contrary that “industry” (that is, the owners of Hoechst and the other producers of CFCs) can’t afford to produce another gram of them. Just let all the billions of people who will lose if the ozone depletion is allowed to continue claim damages, and the alleged profitability of Freon refrigerant is sharply reversed. I’d better make it clear I’m not offering such a lawsuit as a practical course. The suit brought by victims of the Bhopal gas leak showed that the courts are an unreliable agency for correcting this kind of abuse. What I am saying is just that the dollars can be added up with a view to exposing it as an abuse. If a corporation appropriated my land to build its factory, this would be recognized as theft in our culture, and its profits would not be sacred but could be attached to repay me for my property. If the corporation takes away a people’s livelihood or its air, this should be recognized as a crime in a new higher concept of economic justice.

Try applying these elementary notions to all instances of toxic wastes. Mines, chemical firms, and nuclear plants operate at a profit and pay dividends, without accounting for the great lakes of poison they spread around them. The wastes are costs of the original production, but they aren’t charged to those who profited while failing to account for them as costs. Instead, when citizens demand they be cleaned up, government taxes the citizens to pay for a clean-up operation on which the original polluter makes a profit. My favorite example is Hercules Chemical Company of Jacksonville, Arkansas, which drenched the region in dioxins and then formed an organization called “Jacksonville People With Pride” to collect money from the Environmental Protection Agency in a fraudulent sham clean-up. They got away with it for a couple of years. A more serious case is the granting of major contracts for cleaning up military toxic wastes to companies like Hughes Aircraft and General Dynamics which are listed as major polluters.

Not that I begrudge the chemical and nuclear engineers a job. I positively wish them converted to the clean-up industry, away from the sort of thing some of them have been doing. I’m talking about the balance sheet; whose bank account the dollars show in. I could gloss over the point, as some politely do, but that seems to me like becoming an accomplice to fraud.

A third reservation about the cost-benefit analyses is that some things don’t have dollar values. You may have seen the claim by René Dumont to have calculated that the present excess of greenhouse gases from modern industrial practices is causing deaths in the tropics, via drought caused by climatic change, at the rate of a million deaths a year. Now he would not claim much precision in his conclusion, and the chain of inference leading to it is rather long, involving subtle and recent atmospheric physics. His attempted calculation is not absurd, however, and its relevance is evident. My point is just that he was right to present his conclusion as he did, and not as a cost-benefit analysis. If the reason for deeming our energy usage destructive is measured in human lives, then by all means let us speak not just of dollars but of human lives.

So often we see scenarios of the same form: A way of life is built around some economic activity, and then unsuspected damage comes to light. Why are we so often caught unaware? If you have the impression there’s a pattern here, I think you’re right. Greed and opportunism, to be sure. The successful exploiter of resources can defend himself with the riches and the influence gained from the very exploitation. I am trying to call attention to another common thread in many of these cases: complexity.

First point: The science of the initial technology is less complex than the counting of its consequences.

The computation of yield from an ore, or of energy required to raise it and smelt it, is an easier kind of computation than the prediction of the ecological effect of the tailings fifty years later. The interaction of a hundred species at the edge of the desert may determine whether the desert advances into fertile land. Each species can be studied by “clean” science, but their interaction is a “messy” science, ecology.

Messy sciences like geochemistry and sociology tended to be shoved aside in the first centuries of the scientific revolution. Precedence was given to clean sciences because they worked. These days, messy sciences are much studied, perforce; and seeing this, you may get the impression that great advances are being made.

Now it is true that some big models of complex systems are being run on very fast computers. Some of them even work pretty well; for example, predicting the weather a week ahead is a fairly messy problem which seemed thirty years ago to be intractable but is now fairly successfully handled. Don’t confuse this sort of success with understanding. All messy sciences today are poorly understood, some of them much more poorly than meteorology. It’s good that some serious and resourceful people like working on them, because they are so important. But if those people are honest, they can endure studying these areas only by having great talent for getting satisfaction out of partial results. “For small blessings give thanks” might be the motto of the worker in messy sciences. You probably remember that the very valuable projections of “Nuclear Winter,” which were rightly taken into account by policy makers (both the powerful and us ordinary citizens), were one-dimensional. They left out of account most of the known complexities of atmospheric circulation, to say nothing of unknown mechanisms.

We have to keep doing these rough calculations. Ecology will be a messy science for some time to come. Yet I confess to a bit of unprovable optimism. We may not always be as helpless before messy situations as we are now. Looking at the past fuels my optimism. Three hundred years ago, Newtonian mechanics gave philosophers the feeling that the future could be predicted, but only to the extent that the present was known. It seemed the universe would be understood only by grasping at well-determined causes. Yet probability, which came upon the scene at about the same time, increasingly allowed undetermined causes to be part of understanding too. By the 19th century, probabilistic methods explained thermodynamics as neatly as anything in the deterministic realm, and statistical physics is going on to new triumphs today. In the same way, the physics of matter first concentrated on pure crystals, because they were neat enough that you could get somewhere with them, and on gases, because they were simple enough that (with the aid of probability) you could get somewhere with them; yet later, glasses and liquids also became manageable. I venture to hope that we will find new ways of thinking about today’s messy models, as different from the deterministic way and the probabilistic way as they are from each other. Am I referring to ideas of holistic science now being developed by followers of Prigogine? I don’t know; I’m ignorant on the subject; but I don’t think so. I think what I’m hoping for is something not yet clearly in sight.

UNCERTAINTY

Here’s one more weak point to watch for, as important as any of the others: uncertainty. Criticism of science and technology often hinges on risk. The critic declares the risk unacceptable; the defender insists that the critic is impeding progress. A spectacular instance was, and still is, the dispute over guidelines for containing genetically engineered organisms. I’m going to use a less prominent example.

The space probe Galileo was launched by a space shuttle. Aboard Galileo was a small plutonium power plant (strictly speaking, radioisotope thermoelectric generators rather than a reactor). You may have seen the criticism of this plan by Karl Grossman and others.5 They pointed out that the shuttle launch is not perfectly safe (as sane astronauts like John Glenn knew even before the Challenger exploded), and that if Galileo’s plutonium package should be shattered during launch it would spray into the atmosphere a quantity of plutonium sufficient to poison millions of people. The planetologists, almost all of them, stuck by the plan.

The launch took place, and the space-ship Galileo went safely on its way; but the issue is still current. In the first place, more than just Galileo is involved. Ulysses has since been launched, and other research space vehicles with nuclear power sources are planned.

In the second place, do you know Galileo’s planned route to Jupiter? It’s not going outward all the way. It’s going step-by-step, picking up a little additional energy in each of a sequence of near encounters with planets. This is an elegant trick. A small object coming in at a planet may brush by either side, getting deflected. Or, of course, it may come in between and crash on the planet. If it brushes by a large planet all alone in space, it will leave with as much energy as it came in with, only its direction will change. On the other hand, in the complicated system of planet and sun, a fly-by can send the small object off with a little more energy. Galileo is to get such boosts at each of its stepping-stones; and two of these boosts are from this planet, Earth. That’s right. This little nuclear ship that Karl Grossman was trying to get us to worry about will come heading just about straight at us (remember these fly-bys have to be pretty near misses if the helpful change in orbit is to take place). Suppose there were a miscalculation? Some miscalculations will just put it onto a course which will spoil its mission, but some miscalculations will make it become a meteorite. Actually the issue is not primarily miscalculation but loss of control. The steering can’t be corrected if communication with space scientists on Earth is lost, and we know that can happen because it did happen with the Soviet mission to Phobos this year, and intermittently with the U.S. Venus probe now in orbit.
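
The bookkeeping behind such a boost can be put in one line of algebra (an idealized two-body picture, not the mission’s actual trajectory computation; the symbols here are mine). Relative to the planet the encounter is elastic, so the probe’s speed in the planet’s frame is the same going out as coming in, only redirected; but the planet itself moves with velocity u about the sun, and transforming back to the sun’s frame the outgoing speed can be larger:

\[
  \lvert \mathbf{v}_{\mathrm{out}} - \mathbf{u} \rvert
  \;=\;
  \lvert \mathbf{v}_{\mathrm{in}} - \mathbf{u} \rvert
  \qquad\Longrightarrow\qquad
  v_{\mathrm{out}} \;\le\; v_{\mathrm{in}} + 2u,
\]

with equality in the head-on limit. The planet pays for the boost with a slowing far too small to measure; the probe pays by having to pass close, which is exactly why these fly-bys must be near misses.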

I was saying this to some friends and they thought I was arguing against the Galileo mission. Not quite so simple. I admit the risk of Galileo crashing into Earth is tiny indeed; and maybe future nuclear-powered space-ships will be a little safer and be launched by a safer booster. I’m advancing this example merely as something meriting more thought. It is a good instance of difficult risk evaluation.

Some important scientific experiments (interplanetary probes, genetic engineering) entail small risks of significant damage. How much do we have to want to know something in order to take such risks?

As an old science-fiction hand, I’m a little on the defensive here. The American science-fiction author Ben Bova goes out of his way to enlist us on the side of the nuclear industry: as the space probes’ cheering section, he would have us drown out the eco-freaks who want to ground those noble plutonium reactors. There’s a resonance from one of my favorite Soviet science-fiction writers, who is two people, Arkadii and Boris Strugatskii. One of their characters in the far future discusses non-human extraterrestrial civilizations, the Leonidians who stagnate because they have achieved union with nature, and the Tagorians who seem slow to progress because they insist on knowing all the possible bad consequences of their initiatives. We humans, on the other hand… MOVE. But I really must resist the implication that all good old science-fiction fans should charge ahead with the dubious experiments.

We really do need an analysis of the risks even when they’re small. The basic decision to power Galileo with plutonium was not the result of such an analysis. The decision was made by the U.S. military, which wants reactors in space because it wants reactors in space. The U.S. military let Ronald Reagan lie on its behalf about what its satellites were going to do up there; it wouldn’t be above lying about this. I certainly don’t entrust my risk analysis to these people, who have been playing their game of Mutual Assured Destruction for almost forty years and would be playing it still if they hadn’t found ways to put the survival of the world at even greater risk from their weapons. Safe enough for the generals does not mean safe enough for responsible people.

But suppose we do get together a trustworthy team to do a serious risk analysis, say with the participation of the Natural Resources Defense Council. What should the analysis consist of? A probability computation? But probability theory is regarded as applicable to situations where many repetitions of a random process are made or could be made; expected values can be computed and interpreted clearly provided the gains and the losses from different outcomes are subject to addition and subtraction. Here we do not have such a case. Here we deal with small but unknown probabilities and unknown (perhaps large) penalties. We are in different conceptual territory, that of risk analysis, decision theory, and statistics with small non-random samples. This area is like the area of messy sciences: many people are working at it these days; its importance has received well-deserved recognition, but I have to tell you that things are not coming clear. Everything about it is controversial.
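
To put the difficulty in symbols: the classical recipe is the expected value, the probability-weighted sum over outcomes (the formula is textbook-standard; the letters p, L, and G are mine, for illustration only):

\[
  E[X] \;=\; \sum_i p_i\,x_i
  \;\approx\;
  p\,(-L) \;+\; (1 - p)\,G,
\]

where p is the probability of catastrophe, L the loss if it happens, and G the value of the science if it doesn’t. The formula is innocent; the trouble is that here p is tiny but not reliably known, and L, millions of people poisoned, resists any dollar figure. Multiply an unknown small number by an unpriceable large one and you can make the product come out wherever you like.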

Even an earth-bound example will do; it’s been with us for years: if your friendly nuclear power plant next door has a chance of one in a million of blowing up within a year in a Chernobyl-like incident, is that sufficient reason in itself for closing it down? One in a billion? If you have trouble answering such a question, this does not prove that mathematical education is foundering and you are a generation of innumerates. Nobody can answer such a question in a clear-cut way. I am not speaking against the study of decision theory. I am reporting that its present status is pretty primitive. Really, if anything I am speaking for studying it. Just don’t hold your breath waiting for definitive answers.

GIVE THE FUTURE A CHANCE

In short, I’m calling for fellow scientists to accept their responsibility for the future. I’m calling for those who aren’t scientists by job description to join in the effort. Science and technology are central to the problems I’ve put forth here, but there’s no limitation on who can help solve them. I’m trying to communicate my feeling that seeking overall solutions (solutions that will stick) is even harder than seeking the case-by-case solutions we usually think about.

We won’t make great improvements in a few years, perhaps. We may have to rely on theories and approaches not yet developed and on coworkers not yet born. That’s all right. The future has a right to a share of the action. But only some problems can be left for the future. Any species we allow to die off this decade will not regrow a decade later. The minimum we have to insist on is to leave the next generations a world they can live on, to give the future a chance.6

NOTES

1 If the motive had been to shorten the war by demonstrating the power of the new weapon, the bomb could have been exploded in an unpopulated area in sight of Tokyo, and this was Teller’s recommendation. Secretary Stimson’s diaries and other sources threw some light on the decision in later years. See P.M.S. Blackett, Fear, War, and the Bomb: Military and Political Consequences of Atomic Energy, Whittlesey House, New York, 1948; Len Giovannitti and Fred Freed, The Decision to Drop the Bomb, Coward-McCann, New York, 1965. For a fascinating analysis and guide to bibliography, see chapter 9 of H. Bruce Franklin, War Stars: The Superweapon and the American Imagination, Oxford University Press, New York, 1988.

2 186 South Street (4th Floor), Boston, MA 02111.

3 The Institute for Social Inventions, 24 Abercorn Place, London NW8 9XP.

4 For discussion of existing codes of professional ethics, here are some informative sources: Stephen H. Unger, Controlling Technology, Holt, Rinehart & Winston, New York, 1982. Mark S. Frankel (ed.), Values and Ethics in Organization and Human Systems Development, AAAS, Washington DC, 1987.

5 For example, Karl Grossman and Judith Long, “Plutonium Con,” The Nation, 249 (1989), p. 589.

6 The issues raised by involvement of academic research with the military are surveyed in several articles in Science for the People 20 (1988), no. 1.

Copyright 1990 Chandler Davis.