
The following essay was published in the American Journal of Physics, Vol. 56 (12), December 1988.

Published on the Internet for the first time, with the author's permission, April 2001.




The new physics - Physical or mathematical science?

Robert L. Oldershaw
12 Emily Lane,  Amherst, MA 01002
(Received 5 August 1987; accepted for publication 6 February 1988)


As creators of the new physics have proposed increasingly abstract descriptions of nature, the "testability" of their theoretical constructs has markedly declined. While the current problems are not different in kind from those characterizing other eras in the evolution of physics, they do appear to be substantially different in degree from past experience. This situation, exacerbated by an incongruously strong confidence in the standard paradigms of particle physics and cosmology, is a cause for concern.

I. SCIENCE'S SINE QUA NON

    The sine qua non of physical science is empirical testing of hypotheses. Without this acid test we would have no way of distinguishing scientific gold from fool's gold, and we might come to view ourselves as being quite rich when, in fact, our pockets were mostly full of pretty, but non-negotiable, iron pyrite. In the case of pure mathematics, on the other hand, empirical testing is usually irrelevant because application to the physical world is not the primary intent of pure mathematics. In this article, it is suggested that an undesirable blurring of the distinction between physical science and mathematical abstraction has taken place in the fields of particle physics and cosmology over the past three decades. Specifically, our ability to test theoretical constructs in these branches of physics has begun to decrease at a worrisome rate, and all scientists must share at least a little concern over the implications of this trend. Suppose that in the near future physicists proposed a theory that, at least on paper, provided a theoretical unification of all the known forces and was consistent with all observable phenomena. However, say that it had just one little catch: It was untestable, i.e., definitive predictions by which the theory could be rigorously tested involved energies that would forever exceed the capabilities of particle accelerators, energy levels that the theory itself said could only have existed within 10⁻²⁵ s of the Big Bang and thereafter nevermore, and the lower-energy extrapolations of the theory that could, in principle, be used as tests were ambiguous enough that the theory was effectively unfalsifiable. Could such a theory be classified as physical science? If we were to retain empirical testing as the hallmark of physical science, then this theory would not be physical science. With all due respect for the complexity, self-consistency, elegance, and power that such a theory might have, it would still be more properly classified as pure mathematics, since the "world" it described could never be more than an abstract possibility.
    Fortunately, theoretical physics is unlikely to ever reach the extreme of complete untestability. But, on the other hand, theoretical physicists are currently very enthusiastic about a new paradigm called superstring theory and some are even claiming that it is the "holy grail" of physics. Yet even its staunchest proponents acknowledge1 that the paradigm remains virtually untestable. This somewhat paradoxical state of affairs is not as rare as one might expect in current theoretical physics, as will be shown below. Since this article is aimed at scientists in all fields, the sources of more detailed discussions cited herein will be, as much as possible, limited to accessible treatments that avoid the time-honored tradition of intimidation by means of technical esoterica. The cited references usually point the intrepid reader toward more rigorous presentations.
    It should also be acknowledged from the outset that the degree to which current theories, and more encompassing paradigms, suffer from testability problems is quite variable. There is little doubt that the standard model of particle physics can be defended on the basis of experimental evidence far more easily than is the case with superstring theory, though the relative nature of this statement should be appreciated. The major goal of this article, however, is not to rank currently popular theories according to their level of empirical support. Rather, the main point of this article is to document a more general aspect of recent theoretical physics: a growing tendency to accept, and in some cases ignore, serious testability problems. Therefore, examples of problems that characterize this trend are taken from several different theoretical models, from those that are acknowledged to be very speculative and from those that are widely believed to be empirically well founded.

II. RELATIVITY: ARCHETYPAL PHYSICAL SCIENCE

    First we need to discuss how theories are properly tested, and this can be made more interesting by using Einstein's theory of relativity as an example of a paradigm that has set the standard for modern physical science. The relativistic paradigm was born in 1905 as the Special Theory of Relativity (unaccelerated reference frames), was generalized to include accelerated motions in 1916 (General Theory of Relativity), and continues to this day to be a cornerstone of physics. In his highly commendable biography2 of Einstein, entitled Subtle is the Lord..., Abraham Pais referred to relativity as a "theory of principle." By this he meant that the paradigm was founded upon a well-defined guiding principle: that the laws of nature should not depend on arbitrary choices of reference frames. All of the remarkable discoveries produced by relativity (the new understanding of space-time, the reconciliation of the principle of relativity with the laws of electrodynamics, E = mc², the elegant theory of gravitation, black hole physics, and more) were the result of a consistent application of Einstein's guiding principle. Unfortunately, such principles, and individuals like Einstein, are very rare.
    When a general principle is not available for problem solving in a research field, scientists usually turn to a "model-building" approach, a practical alternative method for constructing theories. In model building, one first sets up the simplest and most plausible model that can describe a set of phenomena. Testing may be used to identify good aspects of the model and things that need changing; the discovery of new phenomena can also result in modifications or additions to the original model. In principle, the model should gradually become an increasingly better representation of the real world, but, unfortunately, model building has an Achilles heel. If a "theory of principle" fails a major empirical test, one is forced to scrap the old theory and start again. When a model-building theory fails a similar test, one can tinker with the old theory until it "passes" the test. Therein lies the potential for serious trouble. Model building can result in very impressive theories such as quantum mechanics, or in practical, but highly artificial, models such as the Ptolemaic universe. Obviously, effective empirical testing is especially crucial to the integrity of theories produced by model building, because we lack the compass of a guiding principle and to a large extent are employing a "make it up as you go" method.
    There are two major types of predictions used to evaluate scientific theories empirically. The first, and most rigorous, will be referred to as a "definitive prediction"; it involves predicting something unexpected (something previous theories would not predict) before it has been observed. Einstein's prediction of the "bending of starlight," tested during the 1919 solar eclipse, is an excellent example. A theory that can repeatedly identify and pass tests of this sort demonstrates that it represents a better approximation of how nature works. The paradigm of relativity made several such predictions (E = mc², gravitational redshifts, time dilation, etc.) that have been tested repeatedly and reconfirmed over the last 80 years. Einstein contended that if Lorentz invariance had been violated, or if the 1919 eclipse experiment had given a negative result, or if other definitive tests had failed, then he would have regarded the whole paradigm as being almost certainly wrong. Such was his scientific integrity and confidence in definitive tests derived from fundamental principles.
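    For concreteness, the standard general-relativistic value for the deflection of a light ray grazing the solar limb (quoted here only as an illustration of what made the prediction definitive) is

\[ \delta\varphi \;=\; \frac{4GM_{\odot}}{c^{2}R_{\odot}} \;\approx\; 1.75\ \mathrm{arcsec}, \]

roughly twice the value obtained by treating light as Newtonian particles falling in the Sun's gravity, so the 1919 measurements could discriminate cleanly between the old framework and the new.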
    A second type of prediction is actually not a prediction at all, but rather a "retrodiction." For example, the anomalous advance of the perihelion of Mercury had been a tiny thorn in the side of Newtonian gravitation long before general relativity came upon the scene. Einstein found that his theory correctly "predicted," actually retrodicted, the numerical value of the perihelion advance. The explanation of the unexpected result of the Michelson-Morley experiment (the constancy of the velocity of light) in terms of special relativity is another example. Thus retrodictions can demonstrate a new theory's ability to account for previously identified anomalies or the degree to which it conforms to known phenomena. Retrodictions usually represent falsification tests: the theory is probably wrong if it fails the test, but it should not necessarily be considered right if it passes, since passing does not involve a definitive prediction. However, if a theory is virtually in final form (i.e., subsequent adjustments would be considered arbitrary) and if it can repeatedly identify and pass retrodictive tests that go beyond the phenomena that originally led to the founding of the theory, then these retrodictions can justifiably be thought to increase our confidence in the theory.
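    The perihelion case can likewise be put in numbers. The general-relativistic shift per orbit (again quoted only for illustration) is

\[ \Delta\varphi \;=\; \frac{6\pi GM_{\odot}}{c^{2}a(1-e^{2})} \;\approx\; 5\times10^{-7}\ \mathrm{rad}, \]

which, for Mercury's orbital elements, accumulates to about 43 arcseconds per century, just the residual that Newtonian perturbation theory had been unable to account for.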
    But, in the final analysis, only true definitive predictions can justify the promotion of a theory from being viewed as one of many plausible hypotheses to being recognized as the best available approximation of how nature actually works. A theory that cannot generate definitive predictions, or whose definitive predictions are impossible to test, can be regarded as inherently untestable; we will refer to this as "untestability of the first kind." There is another form of untestability that is of interest to the present discussion. A theory that has very many adjustable parameters, or exists in more alternative versions than the number of potential tests (hedging one's bets), or in general is modifiable in an ad hoc manner, is effectively untestable; we will refer to this as "untestability of the second kind."

III. UNTESTABILITY OF THE FIRST KIND

    We are now in a position to discuss the testability of the new physics, which is broadly defined here as the standard paradigm of particle physics,3 the standard paradigm of cosmology, and recent variations on these themes. A comprehensive analysis of the testability of these paradigms could fill several books, and so only a representative sampling of major testing problems associated with the new physics will be presented here. Proponents of the new physics will no doubt feel that the present discussion does not include positive support for these theories and therefore leaves the reader with too negative an impression. However, countless technical, general, and popular discussions of the new physics have tended to be so dogmatic and optimistic that a small dose of antidote could not hurt and may be very helpful. If the "believers" repeatedly claim the right to present very idealized overviews of the new physics, then they are obligated to grant "skeptics" the chance to present reasoned alternative viewpoints.
    (1) Mentioned earlier was the remarkable example of superstring theory, a variation on the standard paradigm of particle physics in which fundamental particles are treated as one-dimensional strings rather than mathematical points. The community of theoretical physicists is very excited about this theory and some regard it as the ultimate unified paradigm of physics: "It is a miracle; it is the theory of the world."4 However, as attested to by one of the primary spokesmen for the new physics (Steven Weinberg), "there seem to be no decisive tests in sight"1 by which the superstring theory could demonstrate its scientific validity.
    (2) The standard paradigm of particle physics has been unable to retrodict successfully the masses of quarks and leptons or the organization of fundamental particles into regular families. The values of more than 20 parameters that are crucial to the paradigm, such as particle masses, the coupling strengths of the forces, and the magnitudes of CP violations, cannot be uniquely derived and therefore are freely adjustable.4-6
    (3) The standard model is completely dependent upon the existence of the hypothetical "Higgs boson," yet none of the variants on the grand unification theme, a cornerstone of the new physics, can predict the mass of this crucial particle or how it interacts with other particles.7 One worries that the next new particle to be found with a mass between a few giga-electron-volts and a tera-electron-volt will be christened the Higgs boson by fiat.
    (4) Many theories of the new physics require extra dimensions beyond the four dimensions of space-time with which we are familiar; 5 to 26 dimensions is typical, and about 950 dimensions is the latest record. Yet there is no known way to test empirically for the existence of these extra dimensions.8
    (5) The hypothesized unification of the four forces (gravitational, electromagnetic, weak, and strong) is predicted to occur at energies that are now and probably forever inaccessible to empirical testing.3
    (6) The standard cosmological paradigm asserts that the key events in the evolution of the universe took place within 10⁻²⁵ s after the Big Bang. However, even in principle, we cannot obtain direct information on the state of the universe prior to decoupling at about 10¹³ s after the Big Bang (see the rough conversion to years sketched after this list).9
    (7) The validity of the most widely accepted cosmological model (Big Bang plus inflation) is completely dependent upon the validity of the standard paradigm of particle physics, but the latter, as we have seen, suffers in many ways from untestability of the first kind.9
    (8) Standard cosmological models have never been able to retrodict satisfactorily the existence of galaxies; nor did they predict the recently discovered bulk streaming of large numbers of galaxies, or the existence of the enigmatic dark matter that constitutes more than 90% of the mass of the universe.10
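    To make the time scales in item (6) concrete, a rough conversion (assuming about 3.2 × 10⁷ seconds per year) gives

\[ 10^{13}\ \mathrm{s} \;\approx\; \frac{10^{13}}{3.2\times10^{7}}\ \mathrm{yr} \;\approx\; 3\times10^{5}\ \mathrm{yr}, \]

so the earliest epoch from which direct information can reach us lies some 38 orders of magnitude in time after the 10⁻²⁵ s epoch on which the paradigm's key claims depend.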
    It is sometimes stated11 and more often implied that "physics is nearly finished," that we have just about figured everything out. The above examples of inherent untestability, which concern fundamental aspects of the "nearly finished physics," tell a different story.
    Moreover, examples of effective untestability are more numerous and even more worrisome.

IV. UNTESTABILITY OF THE SECOND KIND

    Below is a sampling of some of the more clear-cut examples of the effective untestability of the new physics, beginning with particle physics. For an excellent and readable historical review of the development of the new physics, notable in that it avoids the seamless idealization that typifies recapitulations by the particle physicists themselves, the book Constructing Quarks by Pickering3 is highly recommended.
    (1) When fractionally charged quarks were initially proposed as the fundamental constituents of hadrons, a truly definitive prediction--the existence of particles with unusual charges such as +2/3 or -1/3--appeared to be inevitable. After many fruitless searches, theoreticians hit upon a unique way out of the failed prediction: It was proposed that quarks were dynamically confined inside hadrons and so free quarks should not be observed. Necessity is indeed the mother of invention. If negative results can be circumvented by strategies of this sort, can the theory in question be regarded as testable?
    (2) The predicted existence of magnetic monopoles is another example of a definitive prediction by Grand Unified Theories (GUTs), a very important prediction for the whole standard paradigm of particle physics. In spite of many heroic and imaginative attempts over a period of at least 40 years to detect these mythological particles, results have been consistently negative.12 Most distressing is the fact that negative results are often followed by changes in the predicted properties of the magnetic monopoles (e.g., masses or hypothetical spatial distribution) such that the new magnetic monopole predictions are not in conflict with existing observations. If physicists are unwilling to take "no" for an answer to the question of the existence of magnetic monopoles, what does this say about the testability of the standard paradigm?
    (3) For some time, the gauge theories that provide the mathematical foundations of the new physics had the serious problem of predicting infinite values for some physical quantities. This was regarded as physically unacceptable and a technique, called renormalization, was developed to remove the unwanted infinities. The technique worked quite nicely but some physicists, for example, Dirac,13 were deeply concerned about the ad hoc nature of this resolution and regarded renormalization as an arbitrary device whose necessity indicated that something was fundamentally wrong with the theories.
    (4) In the 1970s, measurements of the proton structure function were at variance with initial predictions of the quark model. This problem was "solved" by introducing and manipulating two theoretical devices: a quark-antiquark "sea" inside the proton (in addition to the regular quark constituents) and a new set of adjustable particles (gluons) to mediate the interquark forces.3 Is this acceptable model building or the addition of epicycles?
    (5) Even the strongest supporters of the new physics admit that the introduction of the "Higgs mechanism" was a totally ad hoc solution to major problems of GUTs: Without it, the gauge theory of the electroweak force was nonrenormalizable and disagreed with observations.3 In general, a renormalizable gauge theory with spontaneous symmetry breaking is highly dependent upon such a mechanism.4 According to an eminent theorist who has played a major role in the development of the new physics, "...the only legitimate reason for introducing the Higgs boson is to make the standard model mathematically consistent... The biggest drawback of the Higgs boson is that so far no evidence of its existence has been found. Instead, a fair amount of indirect evidence already suggests that the elusive particle does not exist. Indeed, modern theoretical physics is constantly filling the vacuum with so many contraptions such as the Higgs boson that it is amazing a person can even see the stars on a clear night!"7 Indeed, and it is this tendency that severely reduces the effective testability of the new physics.
    (6) To prevent inconsistencies between the electroweak and QCD theories, theorists invented a new set of quantum numbers for quarks called "color"; each "flavor" of quark could exhibit one of three colors.5 Like renormalization, this device worked quite well but, again, should we regard it as an ingenious and proper product of model building or as another epicycle?
    (7) The same question applies to the introduction of yet another new quantum number, "charm," and the charmed quark (the GIM mechanism), which were introduced on an ad hoc basis to solve the kaon-decay anomaly and to provide symmetry with the four leptons known at that time (more were found later), and for which the supporting data are ambiguous at best.3
    (8) Recently, the standard GUT model yielded the bold prediction that protons were unstable and had a lifetime on the order of 10³² years. This prediction is currently regarded as having been falsified, but what is more worrisome is that there is a seemingly limitless supply of alternative GUTs to take the place of the apparently falsified version. Since GUTs are so adjustable, how can one test a "one model fits all data" paradigm?
    (9) A general prediction of the standard paradigm of particle physics has been that spin should be largely irrelevant in high-energy, large-angle, elastic scattering of hadrons. This prediction has been repeatedly falsified over the last 10 years in a series of proton-proton scattering experiments by Krisch and his collaborators.14 The reaction of the theorists has been predictable: initial disbelief in the data and, when the unhappy results would not quietly disappear, subsequent attempts to "save the phenomenon" by introducing new theoretical considerations.
    (10) A recent addition to the new physics repertoire is "technicolor": a hypothetical new strong interaction for hypothetical subcomponents of the hypothetical Higgs boson (see No. 5 above). How many levels of untestable abstraction are we to allow?
    (11) There are at least six versions of superstring theories and currently no way to decide their relative merits.1 Even if the superstring paradigm could overcome its previously mentioned untestability of the first kind, it would still be plagued by untestability of the second kind.
    (12) Coupling of the graviton (the hypothetical particle mediating gravitational interactions) to the hypothetical Higgs field would result in an enormous value for the cosmological constant. This is in clear conflict with the observation that the cosmological constant is zero (or exceedingly small). So theorists have proposed that without the Higgs field the cosmological constant would have had a huge negative value, but that with Higgs field coupling this is neatly canceled by the huge positive value to give a net value of about zero, as observed.7 Is this not the sledgehammer approach to physics problems?
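    To see what is being canceled, one can write the effective cosmological constant as the sum of a "bare" term and the contribution of the vacuum (Higgs) energy density (a schematic form, used here only to illustrate the argument):

\[ \Lambda_{\mathrm{eff}} \;=\; \Lambda_{\mathrm{bare}} \;+\; \frac{8\pi G}{c^{4}}\,\varepsilon_{\mathrm{vac}} \;\approx\; 0 . \]

Each term on the right is enormous by any laboratory standard, yet the sum must vanish to dozens of decimal places to be consistent with observation; that is the sledgehammer cancellation in question.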
    These are just a few examples of the remarkable resiliency of the new physics, a resiliency that leaves it on the verge of untestability. It is amazing how predictive failures (e.g., the nondetection of fractionally charged quarks) are cleverly circumvented and then gradually the failure/solution comes to be regarded as an argument in support of the paradigm! The fundamental question is where one should draw the line between acceptable modification of a theory, in keeping with traditional model building, and artificial modifications that spoil the conceptual elegance of the theory and have no physical justification other than the fact that they circumvent inconsistency between theory and observation. Unfortunately, there is no clear-cut dividing line by which one could make an objective decision. In the case of a theory in the latter stages of ad hoc development (e.g., the Ptolemaic universe), the sheer ungainliness of the theoretical contraption begins to raise doubts in the minds of those who are not positively biased by participation in its construction, and gradually those doubts begin to spread throughout the scientific community. Could the new physics represent model building gone awry? Yes, that could be the case; so many new "fundamental" particles, force-carrying particles, and theoretical devices have been tacked on to the original quark model that the new physics is beginning to make the Ptolemaic universe look rather svelte. On the other hand, one must emphasize that the new physics has much more data to explain, and therefore this heroic theoretical effort might not represent model building gone awry. Happily, there is an excellent prospect for deciding between these two possibilities, and this will be the subject of Sec. V of this article. Briefly, the ability or inability of existing paradigms to predict correctly the form of the enigmatic dark matter that constitutes at least 90% of all matter in the cosmos will be a powerful tool for objectively and scientifically determining: (1) whether the new physics represents a natural or an artificial unification of our knowledge, and (2) if the latter is the case, whether there are other "dark horse" paradigms that hold more promise.
    Before this crucial test is discussed, however, let us take a brief look at the standard cosmological paradigm in terms of effective testability. The Big Bang theory has been the basic paradigm of cosmology for decades. The combination of the observed redshift-distance relation for galaxies and the isotropic microwave background radiation is strong evidence in support of the idea that 10 to 20 billion years ago the observed portion of the cosmos was in a much more compact state and that it has been expanding ever since then. According to the Big Bang model, the initial state was a singularity: All of the atoms, stars, and galaxies we now observe were compressed into an entity with infinite density, temperature, and pressure, and with a diameter equal to zero (in fact, space and time did not yet exist). For unknowable reasons (perhaps boredom), the singularity began to expand. Einstein, who developed general relativity, the theory upon which the Big Bang model is based, was very critical of the idea of an initial universal singularity because he felt that general relativity was being pushed beyond the limits of its applicability.2 He felt that one could turn the mathematical crank until values like zeros and infinities resulted, but that this was not realistic. Aside from the problem of an initial singularity,9 the original Big Bang model had other major technical problems,9,15 such as the flatness problem, the horizon problem, and the smoothness problem. It also could not explain the existence of galaxies. It predicted a homogeneous distribution of matter on large scales, whereas inhomogeneity in the distribution of matter has always been observed on the largest scales that could be adequately tested.16,17 It did not predict that the universe would be largely composed of an unknown form of matter referred to as the dark matter. It predicted uniform expansion of the galactic environment, whereas observations reveal large deviations from a uniform Hubble flow.18 And there are many less dramatic predictive problems that have beset the original Big Bang theory, such as those associated with predicted elemental abundances19 and the predicted age of the universe.20
    In order to rescue the Big Bang theory, particle physicists began to suggest ways in which the new physics might be brought to bear on cosmological problems. Although astrophysicists were initially somewhat demure about this new relationship, before long the majority of the physics community was proclaiming that the "marriage" of cosmology and particle physics heralded the birth of a complete understanding of nature. A much smaller group of scientists worried that it looked more like an incestuous affair with a high probability for yielding unsound progeny. Recently, a noted astrophysicist candidly characterized the interactions of astrophysics and particle physics as follows:

"[T]he big news so far is that particle physicists seem to be able to provide initial conditions for cosmology that meet what astronomers generally think they want without undue forcing of the particle physicist's theory. Indeed I sometimes have the feeling of taking part in a vaudeville skit: "You want a tuck in the waist? We'll take a tuck. You want massive weakly interacting particles? We have a full rack. You want an effective potential for inflation with a shallow slope? We have several possibilities." This is a lot of activity to be fed by the thin gruel of theory and negative observational results, with no prediction and experimental verification of the sort that, according to the usual rules of evidence in physics, would lead us to think we are on the right track..."21
    Let us consider several examples of applications of the new physics to the realm of astrophysics and evaluate the testability of the results. Using GUTs as a theoretical toolbox, physicists fixed up the old Big Bang theory by the addition of a period of rapid expansion (inflation), which simultaneously removed the flatness, horizon, and smoothness problems.9 But, of course, the testability of the inflationary hypothesis is entirely dependent upon the testability of GUTs, and we have already seen that there are major problems with the latter. Moreover, the inflated Big Bang model may have a serious problem with the age it predicts for the universe,20 and the one rigorous prediction that it does make, that Ω = 1 (which means that the density of the universe is equal to the "critical" value), has been repeatedly contradicted.9 A second example concerns the fact that the old Big Bang theory, when analyzed in terms of the new physics, appeared to generate too many magnetic monopoles; remember that no magnetic monopoles have ever been observed. Inflation deftly "solves" this problem by inflating space to such an extent that the resulting magnetic monopole density can be made arbitrarily low,15 in fact, low enough so that we could not have expected to detect a magnetic monopole even after 40 years of trying. Obviously, if the density of magnetic monopoles is now to be a freely adjustable parameter, via the inflation scenario, then the effective testability of theoretical constructs predicting the existence of magnetic monopoles is significantly reduced. One is reminded of that vaudeville act: "You got negative results? Not to worry. We just pump up the old cosmos a little. Oops, not too much! And, presto, your negative result is a vindicated expectation." A third example concerns the recent discovery that the large-scale structure of the cosmos is organized like "soap suds," with galaxies residing primarily in the interstices of huge bubblelike voids. The new physics has offered at least three different interpretations for this phenomenon, all post facto, of course. The cosmic suds are attributed to (1) superconducting and furiously vibrating cosmic strings,22 or (2) biased galaxy formation in a WIMP-dominated universe10 (WIMP stands for a hypothetical class of weakly interacting massive particles), or (3) hard as it may be to believe, double inflation23 (if one inflation doesn't solve all the problems, how about two inflationary episodes?). All of these theoretical interpretations are so unconstrained that they are effectively untestable, and their ad hoc status is embarrassingly obvious. Many other examples of new physics applications in the astrophysical realm just leave one's mouth hanging open: quasars are cosmic strings,22 quasars are axion lumps,24 the solar neutrino problem is due to axions or other WIMPs in the Sun,25 neutron stars are really big quark nuggets,26 the difference between spiral and elliptical galaxies is whether they were "seeded" with self-intersecting or nonself-intersecting "loops of string,"27 the dark matter is WIMPs or "shadow matter"16 (the latter idea may have been cribbed from Lewis Carroll), and so on.
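    For readers outside cosmology, the density parameter invoked above is defined relative to the critical density that just closes a Friedmann universe (a standard definition, included here for reference):

\[ \Omega \;\equiv\; \frac{\rho}{\rho_{\mathrm{crit}}}, \qquad \rho_{\mathrm{crit}} \;=\; \frac{3H_{0}^{2}}{8\pi G} \;\sim\; 10^{-26}\ \mathrm{kg\ m^{-3}}, \]

i.e., on the order of a few hydrogen atoms per cubic meter for the range of Hubble constants under discussion. Inflation requires Ω = 1, whereas the matter actually inventoried, whether luminous or dynamically inferred, has persistently fallen well short of that value.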
    In general, when the proponents of the new physics apply their art to cosmological problems they usually invoke essentially untestable physics that is supposed to have taken place in the unobservable past in order to explain current observations. Are they solving the problems or hiding them behind a curtain of sophistry? Without adequate testability, how are we to decide this question?

V. OF DARKNESS AND LIGHT

    To add a little optimism to the dose of pessimism contained herein, the following reason for unqualified hope is emphasized. At least 90% of the matter of which the cosmos appears to be composed is in a form that can be detected indirectly (gravitationally) but is apparently too dark to see, hence the name dark matter. Until quite recently, what we had been calling the universe was only the tip of the proverbial iceberg. The discovery of the dark matter is clearly one of the most important empirical discoveries of our era, and any paradigm that purports to have anything general and fundamental to say about nature must convincingly retrodict the dark matter and definitively predict the properties of its constituents. Almost any paradigm can be manipulated to retrodict the dark matter, but correctly predicting the exact properties is a different matter. There is good reason to believe that, through a combination of observations that gradually narrow down the plausible dark matter candidates and observations that offer positive support for particular candidates, we will be able to identify the constituents of the dark matter in the not-too-distant future. The certainty of the identification will, of course, be related to the nature of the dark matter constituents. An unknown population of very low mass stars would probably be easier to detect or rule out than a class of exotic particles such as axions or photinos. Yet, if this crucial search is pursued vigorously and objectively, then we can probably have an answer to the dark matter enigma in less than 10 years, perhaps considerably less. And what a powerful acid test it could be for theoretical physics; a clear-cut answer would represent either a major vindication of the path that we are on or an unmistakable sign that a new path must be broken. Given the importance of answering the dark matter question, research aimed at this question should obviously be given the highest priority.

VI. CONCLUSIONS AND SUGGESTIONS

    As has been discussed above, the new physics and its applications to cosmology are hampered by serious testability problems of the first (inherent) and second (effective) kinds. Perhaps no one is better qualified to comment on this problem than Weinberg:

"...I feel a sense of tremendous frustration. We've been working on these ideas for more than a decade, since the mid-1970s, and we have almost nothing to show for it in terms of hard agreement between predictions and experiments. Only perhaps the prediction of sin2 0 stands up as robust and verified." 6
    Remembering the standards exemplified by Einstein's relativity paradigm (i.e., a paradigm must make unique testable predictions and, if these predictions are contradicted by experiment, then one must pay more than lip service to the idea that the paradigm may be wrong), one must wonder about the recent trend toward an ever-increasing degree of untestability in physics. If the empirical foundation of the new physics is so insecure, and if it is still an axiom of science that without an empirical foundation a paradigm is dangerously adrift in a sea of abstraction, then why is there an almost unquestioned faith in the new physics? How can we understand the remarkable optimism and credulity demonstrated by theorists, experimentalists, peer reviewers, editors, and science popularizers? Perhaps in the end the current overconfidence in the new physics will be vindicated, but at present it is a cause for concern because it simply is not scientifically justified. It seems hard to avoid the conclusion that the new physics and its applications to cosmology have begun to transcend the limits of physical science, as traditionally defined. If this trend were to continue, then we would have to consider classification of the resulting theoretical constructs as mathematical science rather than physical science. While one might admire the breadth, complexity, and/or interconnectedness of the intellectual tapestries they weave, the theories would have to be judged as purely abstract propositions rather than as models of how nature actually works. Fortunately, it is unlikely that this trend will be allowed to continue unchecked because science has self-correcting mechanisms that respond slowly but surely to this sort of problem. Scientists who are not directly part of the particle physics effort, and some who are, have already begun to argue that the new physics must do more to prove its worth.
    Rather than merely criticizing the new physics, the author is in a position to suggest an alternative path toward a more unified physics, and it is one that leads to definitive predictions.28 However, the point of this essay is not to lobby for one particular paradigm over any other. The intended purpose is to demonstrate that there are potentially serious testability problems facing modern theoretical physics and to consider what might be done to reverse this trend. The following are some general suggestions aimed at combating the problems identified above; the suggestions are somewhat idealistic, but science is a struggle toward ideals.
    (1) In general, theoretical physicists must rededicate themselves to the difficult combination of rigorous open-mindedness and constructive skepticism, a combination upon which the evolutionary growth of science depends. We cannot afford to ignore new ideas or to allow any set of ideas to gain a monopoly over scientific thought.
    (2) While speculation is a crucial component of scientific progress, speculative theories must be unambiguously identified and regarded as such until sufficient and successful contact is made with the empirical world. When this is not the case, pure speculations and untested assumptions, if they are continually repeated in the most prestigious forums of science, gradually come to be regarded as probable facts of nature. The brief history of modern science contains many examples of this pernicious problem and we constantly must be on guard against its return.
    (3) Finally, theoretical physics must pay more careful attention to Einstein's dictum that science begins and ends with experience, i.e., empirical data. If hypotheses are to be regarded as scientific, then they must be able to pass multiple retrodictive tests, and, more importantly, they must generate definitive predictions by which they can be unambiguously tested. The dark matter enigma would be a good place to start; theories that cannot pass this most fundamental test cannot be taken seriously. What we need are rigorously falsifiable predictions before the identity of the dark matter constituents is known. It is time to put our gold to the acid test to see whether or not it dissolves. And if it does dissolve, then for science's sake let's not declare it a "phase transition to liquid gold."

1R. Walgate, Nature 322, 592 (1986).
2A. Pais, 'Subtle is the Lord...' (Oxford U. P., Oxford, 1982).
3A. Pickering, Constructing Quarks (University of Chicago Press, Chicago, 1984).
4K. C. Wali, Prog. Theor. Phys. Suppl. 86, 387 (1986).
5C. Quigg, Sci. Am. 252(4), 84 (1985).
6S. Weinberg, Phys. Today 40(1), 7 (1987).
7M. J. G. Veltman, Sci. Am. 255(5), 76 (1986).
8M. M. Waldrop, Science 229, 1251 (1985).
9T. Rothman and G. Ellis, Astron. J. 15, 6 (1987).
10M. M. Waldrop, Science 233, 1386 (1986).
11S. W. Hawking, Is the End in Sight for Theoretical Physics? (Cambridge U. P., Cambridge, 1980).
12D. E. Groom, Phys. Rep. 140, 323 (1986).
13P. A. M. Dirac, Directions in Physics (Wiley, New York, 1979).
14B. M. Schwarzschild, Phys. Today 38(8), 17 (1985).
15A. H. Guth and P. J. Steinhardt, Sci. Am. 250(5), 116 (1984).
16J. O. Burns, Sci. Am. 255(1), 38 (1986).
17D. J. Batuski and J. O. Burns, Astrophys. J. 299, 5 (1985).
18M. M. Waldrop, Science 232, 26 (1986).
19A. Vidal-Madjar and C. Gry, Astron. Astrophys. 138, 285 (1984).
20R. J. Tayler, Q. J. R. Astron. Soc. 27, 367 (1986).
21P. J. E. Peebles, Science 235, 372 (1987).
22M. M. Waldrop, Science 235, 283 (1987).
23J. Silk and M. S. Turner, Phys. Rev. D 35, 419 (1987).
24T. W. Kephart and T. J. Weiler, Phys. Rev. Lett. 58, 171 (1987).
25M. M. Waldrop, Science 229, 955 (1985).
26C. Alcock, E. Farhi, and A. Olinto, Astrophys. J. 310, 261 (1986).
27W. H. Zurek, Phys. Rev. Lett. 57, 2326 (1986).
28R. L. Oldershaw, Astrophys. J. 322, 34 (1987).