2011 : WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?
Post date: May 16, 2011 12:34 PM
Eugene Higgins Professor of Psychology...
"Nothing In Life Is As Important As You Think It Is, While You Are Thinking About It"
Education is an important determinant of income — one of the most important — but it is less important than most people think. If everyone had the same education, the inequality of income would be reduced by less than 10%. When you focus on education you neglect the myriad other factors that determine income. The differences of income among people who have the same education are huge.
Income is an important determinant of people's satisfaction with their lives, but it is far less important than most people think. If everyone had the same income, the differences among people in life satisfaction would be reduced by less than 5%.
Income is even less important as a determinant of emotional happiness. Winning the lottery is a happy event, but the elation does not last. On average, individuals with high income are in a better mood than people with lower income, but the difference is about 1/3 as large as most people expect. When you think of rich and poor people, your thoughts are inevitably focused on circumstances in which their income is important. But happiness depends on other factors more than it depends on income.
Paraplegics are often unhappy, but they are not unhappy all the time because they spend most of the time experiencing and thinking about other things than their disability. When we think of what it is like to be a paraplegic, or blind, or a lottery winner, or a resident of California we focus on the distinctive aspects of each of these conditions. The mismatch in the allocation of attention between thinking about a life condition and actually living it is the cause of the focusing illusion.
Marketers exploit the focusing illusion. When people are induced to believe that they "must have" a good, they greatly exaggerate the difference that the good will make to the quality of their life. The focusing illusion is greater for some goods than for others, depending on the extent to which the goods attract continued attention over time. The focusing illusion is likely to be more significant for leather car seats than for books on tape.
Politicians are almost as good as marketers in causing people to exaggerate the importance of issues on which their attention is focused. People can be made to believe that school uniforms will significantly improve educational outcomes, or that health care reform will hugely change the quality of life in the United States — either for the better or for the worse. Health care reform will make a difference, but the difference will be smaller than it appears when you focus on it.
The Double-Blind Control Experiment
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science,......
Not all concepts wielded by professional scientists would improve everybody's cognitive toolkit. We are here not looking for tools with which research scientists might benefit their science. We are looking for tools to help non-scientists understand science better, and equip them to make better judgments throughout their lives.
Why do half of all Americans believe in ghosts, three quarters believe in angels, a third believe in astrology, three quarters believe in Hell? Why do a quarter of all Americans believe that the President of the United States was born outside the country and is therefore ineligible to be President? Why do more than 40 percent of Americans think the universe began after the domestication of the dog?
Let's not give the defeatist answer and blame it all on stupidity. That's probably part of the story, but let's be optimistic and concentrate on something remediable: lack of training in how to think critically, and how to discount personal opinion, prejudice and anecdote, in favour of evidence. I believe that the double-blind control experiment does double duty. It is more than just an excellent research tool. It also has educational, didactic value in teaching people how to think critically. My thesis is that you needn't actually do double-blind control experiments in order to experience an improvement in your cognitive toolkit. You only need to understand the principle, grasp why it is necessary, and revel in its elegance.
If all schools taught their pupils how to do a double-blind control experiment, our cognitive toolkits would be improved in the following ways:
1. We would learn not to generalise from anecdotes.
2. We would learn how to assess the likelihood that an apparently important effect might have happened by chance alone.
3. We would learn how extremely difficult it is to eliminate subjective bias, and that subjective bias does not imply dishonesty or venality of any kind. This lesson goes deeper. It has the salutary effect of undermining respect for authority, and respect for personal opinion.
4. We would learn not to be seduced by homeopaths and other quacks and charlatans, who would consequently be put out of business.
5. We would learn critical and sceptical habits of thought more generally, which not only would improve our cognitive toolkit but might save the world.
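Point 2 of the list above can be made concrete in a few lines of code. The sketch below runs a label-shuffling (permutation) test: it asks how often a treatment-placebo difference as large as the observed one would arise if group labels were assigned by chance alone. All the scores here are invented for illustration; only the Python standard library is assumed.

```python
# A minimal permutation test: how likely is the observed treatment-placebo
# difference under chance alone? All scores are invented for illustration.
import random

random.seed(1)  # reproducible shuffles

treatment = [7, 9, 8, 6, 9, 8]   # hypothetical improvement scores
placebo   = [5, 6, 7, 5, 6, 4]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(placebo)

# Shuffle the pooled scores many times, re-splitting them at random,
# and count how often chance alone matches the observed difference.
pooled = treatment + placebo
n = len(treatment)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials  # fraction of shuffles at least as extreme
print(f"observed difference: {observed:.2f}, p ~ {p_value:.4f}")
```

A small p-value says the apparent effect is unlikely to be a fluke of random assignment; a large one says the anecdote proves nothing. This is exactly the discipline the double-blind control experiment teaches.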
Chunks With Handles
Director of the Center for Brain and Cognition and professor with the Psychology......
Do you need language — including words — for sophisticated thinking, or does it merely facilitate thought? This question goes back to a debate between two Victorian scientists, Max Müller and Francis Galton.
A word that has made it into the common vocabulary of both science and pop culture is "paradigm" (and the converse "anomaly"), having been introduced by the historian of science Thomas Kuhn. It is now widely used and misused, both in science and in other disciplines, almost to the point where the original meaning is starting to be diluted. (This often happens to "memes" of human language and culture, which don't enjoy the lawful, particulate transmission of genes.) The word "paradigm" is now often used inappropriately — especially in the US — to mean any experimental procedure, such as "the Stroop paradigm", "a reaction-time paradigm", or "an fMRI paradigm".
However, its appropriate use has shaped our culture in significant ways, even influencing the way scientists work and think. A more prevalent associated word is "skepticism", originating from the name of a Greek school of philosophy. This is used even more frequently and loosely than "anomaly" and "paradigm shift".
One can speak of reigning paradigms — what Kuhn calls normal science, and what I cynically refer to as a "mutual admiration club trapped in a cul-de-sac of specialization". The club usually has its Pope(s), a hierarchical priesthood, acolytes, and a set of guiding assumptions and accepted norms that are zealously guarded with almost religious fervor. (They also fund each other, review each other's papers and grants, and give each other awards.)
This isn't entirely useless; it's called "normal science", which grows by progressive accretion, employing the bricklayers rather than the architects of science. If a new experimental observation (e.g. bacterial transformation, or ulcers cured by antibiotics) threatens to topple the edifice, it's called an anomaly, and the typical reaction of those who practice normal science is to ignore it or brush it under the carpet — a form of psychological denial surprisingly common among my colleagues.
This is not an unhealthy reaction, since most anomalies turn out to be false alarms; the baseline probability of their survival as "real" anomalies is small, and whole careers have been wasted pursuing them (think "polywater" or "cold fusion"). Yet even such false anomalies serve the useful purpose of jolting scientists from their slumber by calling into question the basic axioms that drive their particular area of science. Conformist science feels cozy, given the gregarious nature of humans, and anomalies force periodic reality checks, even if the anomaly turns out to be flawed.
More important, though, are genuine anomalies that emerge every now and then, legitimately challenging the status quo, forcing paradigm shifts and leading to scientific revolutions. Conversely, premature skepticism toward anomalies can lead to stagnation of science. One needs to be skeptical of anomalies but equally skeptical of the status quo if science is to progress.
I see an analogy between the process of science and of evolution by natural selection. For evolution, too, is characterized by periods of stasis (= normal science) punctuated by brief periods of accelerated change (= paradigm shifts) based on mutations (= anomalies) most of which are lethal (false theories) but some lead to the budding off of new species and phylogenetic trends (=paradigm shifts).
Since most anomalies are false alarms (spoon bending, telepathy, homeopathy) one can waste a lifetime pursuing them. So how does one decide which anomalies to invest in? Obviously one can do so by trial and error but that can be tedious and time consuming.
Let's take four well-known examples: (1) continental drift; (2) bacterial transformation; (3) cold fusion; (4) telepathy. All of these were anomalies when first discovered, because they didn't fit the big picture of normal science at that time. The evidence that all the continents broke off and drifted away from a giant supercontinent was staring people in the face — as Wegener noted in the early 20th century. (The coastlines coincided almost perfectly; certain fossils found on the east coast of Brazil were exactly the same as the ones on the west coast of Africa; etc.) Yet it took fifty years for the idea to be accepted by the skeptics.
The second anomaly — observed a decade before DNA and the genetic code — was that if you incubate one species of bacterium (pneumococcus A) with another species in a test tube (pneumococcus B), then bacterium A becomes transformed into B! (Even the DNA-rich juice from B will suffice — leading Avery to suspect that heredity might have a chemical basis.) Others replicated this. It was almost like saying put a pig and a donkey into a room and two pigs emerge — yet the discovery was largely ignored for a dozen years, until Watson and Crick pointed out the mechanism of transformation. The fourth anomaly — telepathy — is almost certainly a false alarm.
You will see a general rule of thumb emerging here. Anomalies (1) and (2) were not ignored for lack of empirical evidence. Even a schoolchild can see the fit between continental coastlines or the similarity of fossils. Continental drift was ignored solely because it didn't fit the big picture — the notion of terra firma, a solid, immovable earth — and there was no conceivable mechanism that would allow continents to drift (until plate tectonics was discovered). Likewise, (2) was repeatedly confirmed but ignored because it challenged a fundamental doctrine of biology — the stability of species. But notice that telepathy was rejected for two reasons: first, it didn't fit the big picture, and second, it was hard to replicate.
This gives us the recipe we are looking for: focus on anomalies that have survived repeated attempts at experimental disproof but are ignored by the establishment solely because no one can think of a mechanism. But don't waste time on ones that have not been empirically confirmed despite repeated attempts (or where the effect becomes smaller with each attempt — a red flag!).
"Paradigm" and "paradigm shift" have now migrated from science into pop culture (not always with good results) and I suspect many other words and phrases will follow suit — thereby enriching our intellectual and conceptual vocabulary and day-to-day thinking.
Indeed, words themselves are paradigms, or stable "species" of sorts, that evolve gradually with progressively accumulating penumbras of meaning, or sometimes mutate into new words to denote new concepts. These can then consolidate into chunks with "handles" (names) for juggling ideas around to generate novel combinations. As a behavioral neurologist I am tempted to suggest that such crystallization of words, and the juggling of them, is unique to humans, and that it occurs in brain areas in and near the left TPO (temporal-parietal-occipital junction). But that's pure speculation.
Aether
Father of behavioral economics...
I recently posted a question in this space asking people to name their favorite example of a wrong scientific belief. One of my favorite answers came from Clay Shirky. Here is an excerpt:
The existence of ether, the medium through which light (was thought to) travel. It was believed to be true by analogy — waves propagate through water, and sound waves propagate through air, so light must propagate through X, and the name of this particular X was ether.
It's also my favorite because it illustrates how hard it is to accumulate evidence for deciding something doesn't exist. Ether was both required by 19th century theories and undetectable by 19th century apparatus, so it accumulated a raft of negative characteristics: it was odorless, colorless, inert, and so on.
Several other entries (such as the "force of gravity") shared the primary function of ether: they were convenient fictions that were able to "explain" some otherwise ornery facts. Consider this quote from Max Pettenkofer, the German chemist and physician, disputing the role of bacteria as a cause of cholera: "Germs are of no account in cholera! The important thing is the disposition of the individual."
So in answer to the current question I am proposing that we now change the usage of the word Aether, using the old spelling, since there is no need for a term that refers to something that does not exist. Instead, I suggest we use that term to describe the role of any free parameter used in a similar way: that is, Aether is the thing that makes my theory work. Replace the word disposition with Aether in Pettenkofer's sentence above to see how it works.
Often Aetherists (theorists who rely on an Aether variable) think that their use of the Aether concept renders the theory untestable. This belief is often justified during their lifetimes, but then along come clever empiricists such as Michelson and Morley, and last year's tautology becomes this year's example of a wrong theory.
Aether variables are extremely common in my own field of economics. Utility is the thing you must be maximizing in order to render your choice rational.
Both risk and risk aversion are concepts that were once well defined, but are now in danger of becoming Aetherized. Stocks that earn surprisingly high returns are labeled as risky, because in the theory, excess returns must be accompanied by higher risk. If, inconveniently, the traditional measures of risk such as variance or covariance with the market are not high, then the Aetherists tell us there must be some other risk; we just don't know what it is.
Similarly, traditionally the concept of risk aversion was taken to be a primitive; each person had a parameter, gamma, that measured her degree of risk aversion. Now risk aversion is allowed to be time varying, and Aetherists can say with a straight face that the market crashes of 2001 and 2008 were caused by sudden increases in risk aversion. (Note the direction of the causation. Stocks fell because risk aversion spiked, not vice versa.)
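The "traditional measures of risk" mentioned above are perfectly well defined; it is only their post hoc invocation that turns them into Aether. A minimal sketch of two such measures — a stock's return variance and its covariance-based beta against the market — using invented return series:

```python
# Sketch of traditional risk measures: variance of a stock's returns and
# beta (covariance with the market, scaled by market variance).
# The return series below are invented for illustration.
stock  = [0.02, -0.01, 0.03, 0.01, -0.02]   # hypothetical monthly returns
market = [0.01, -0.02, 0.02, 0.00, -0.01]

def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    """Sample covariance of two equal-length return series."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

variance_stock = covariance(stock, stock)   # dispersion of the stock's returns
beta = covariance(stock, market) / covariance(market, market)  # market sensitivity

print(f"variance {variance_stock:.5f}, beta {beta:.2f}")
```

With these made-up series the beta works out to 1.2: the stock moves somewhat more than the market. The Aether move is to concede that such numbers are not high and then insist there must be some other, unmeasured risk.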
So, the next time you are confronted with such a theory, I suggest substituting the word Aether for the offending concept. Personally, I am planning to refer to the time-varying variety of risk aversion as Aether aversion.
Ecology
Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon;......
That idea, or bundle of ideas, seems to me the most important revolution in general thinking in the last 150 years. It has given us a whole new sense of who we are, where we fit, and how things work. It has made commonplace and intuitive a type of perception that used to be the province of mystics — the sense of wholeness and interconnectedness.
Beginning with Copernicus, our picture of a semi-divine humankind perfectly located at the centre of The Universe began to falter: we discovered that we live on a small planet circling a medium-sized star at the edge of an average galaxy. And then, following Darwin, we stopped being able to locate ourselves at the centre of life. Darwin gave us a matrix upon which we could locate life in all its forms: and the shocking news was that we weren't at the centre of that either — just another species in the innumerable panoply of species, inseparably woven into the whole fabric (and not an indispensable part of it either). We have been cut down to size, but at the same time we have discovered ourselves to be part of the most unimaginably vast and beautiful drama called Life.
Before "ecology" we understood the world in the metaphor of a pyramid — a hierarchy with God at the top, Man a close second and, sharply separated, a vast mass of life and matter beneath. In that model, information and intelligence flowed in one direction only — from the intelligent top to the "base" bottom — and, as masters of the universe, we felt no misgivings exploiting the lower reaches of the pyramid.
The ecological vision has changed that: we now increasingly view life as a profoundly complex weblike system, with information running in all directions, and instead of a single hierarchy we see an infinity of nested-together and co-dependent hierarchies — and the complexity of all this is such as to be in and of itself creative. We no longer need the idea of a superior intelligence outside of the system — the dense field of intersecting intelligences is fertile enough to account for all the incredible beauty of "creation".
The "ecological" view isn't confined to the organic world. Along with it comes a new understanding of how intelligence itself comes into being. The classical picture saw Great Men with Great Ideas...but now we tend to think more in terms of fertile circumstances where uncountable numbers of minds contribute to a river of innovation. It doesn't mean we cease to admire the most conspicuous of these — but that we understand them as effects as much as causes. This has ramifications for the way we think about societal design, about crime and conflict, education, culture and science.
That in turn leads to a re-evaluation of the various actors in the human drama. When we realise that the cleaners and the bus drivers and the primary school teachers are as much a part of the story as the professors and the celebrities, we will start to accord them the respect they deserve.
We Are Not Alone In The Universe
Leading scientists of the 21st century...
I cannot imagine any single discovery that would have more impact on humanity than the discovery of life outside of our solar system. There is a human-centric, Earth-centric view of life that permeates most cultural and societal thinking. Finding that there are multiple, perhaps millions of origins of life and that life is ubiquitous throughout the universe will profoundly affect every human.
We live on a microbial planet. There are a million microbial cells per cubic centimeter of water in our oceans, lakes and rivers; microbes live deep within the Earth's crust and throughout our atmosphere. We have more than 100 trillion microbes on and in each of us. The Earth's diversity of life would have seemed like science fiction to our ancestors. We have microbes that can withstand millions of rads of ionizing radiation, or acid and base strong enough to dissolve our skin; microbes that grow in ice, and microbes that grow and thrive at temperatures exceeding 100 degrees C. We have life that lives on carbon dioxide, on methane, on sulfur, or on sugar. We have sent trillions of bacteria into space over the last few billion years, and we have exchanged material with Mars on a constant basis, so it would be very surprising if we do not find evidence of microbial life in our solar system, particularly on Mars.
The recent discoveries by Dimitar Sasselov and colleagues of numerous Earth-like and super-Earth-like planets outside our solar system, including water worlds, greatly increase the probability of finding life. Sasselov estimates approximately 100,000 Earths and super-Earths within our own galaxy. The universe is young, so wherever we find microbial life there will be intelligent life in the future.
Expanding our scientific reach further into the skies will change us forever.
Deep Time And The Far Future
President, The Royal Society; Professor of Cosmology & Astrophysics; Master,......
We need to extend our time-horizons. Especially, we need deeper and wider awareness that far more time lies ahead than has elapsed up till now.
Our present biosphere is the outcome of more than four billion years of evolution, and we can trace cosmic history right back to a "big bang" that happened about 13.7 billion years ago. The stupendous time-spans of the evolutionary past are now part of common culture and understanding — even though the concept may not yet have percolated to all parts of Kansas and Alaska.
But the immense time-horizons that stretch ahead — though familiar to every astronomer — haven't permeated our culture to the same extent. Our Sun is less than half way through its life. It formed 4.5 billion years ago, but it's got 6 billion more before the fuel runs out. It will then flare up, engulfing the inner planets and vaporising any life that might then remain on Earth. But even after the Sun's demise, the expanding universe will continue — perhaps for ever — destined to become ever colder, ever emptier. That, at least, is the best long range forecast that cosmologists can offer, though few would lay firm odds on what may happen beyond a few tens of billions of years.
Awareness of the "deep time" lying ahead is still not pervasive. Indeed, most people — and not only those for whom this view is enshrined in religious beliefs — envisage humans as in some sense the culmination of evolution. But no astronomer could believe this; on the contrary, it would be equally plausible to surmise that we are not even at the halfway stage. There is abundant time for posthuman evolution, here on Earth or far beyond, organic or inorganic, to give rise to far more diversity, and even greater qualitative changes, than those that have led from single-celled organisms to humans. Indeed, this conclusion is strengthened when we realise that future evolution will proceed not on the million-year timescale characteristic of Darwinian selection, but at the much accelerated rate allowed by genetic modification and the advance of machine intelligence (and forced by the drastic environmental pressures that would confront any humans who were to construct habitats beyond the Earth).
Darwin himself realised that "No living species will preserve its unaltered likeness into a distant futurity". We now know that "futurity" extends far further — and alterations can occur far faster — than Darwin envisioned. And we know that the cosmos, through which life could spread, is far more extensive and varied than he envisaged. So humans are surely not the terminal branch of an evolutionary tree, but a species that emerged early in cosmic history, with special promise for diverse evolution. But this is not to diminish their status. We humans are entitled to feel uniquely important as the first known species with the power to mould its evolutionary legacy.
A Solution for Collapsed Thinking: Signal Detection Theory
Richard Clarke Cabot Professor of Social Ethics, Department of Psychology, Harvard......
We perceive the world through our senses. The brain-mediated data we receive in this way form the basis of our understanding of the world. From this become possible the ordinary and exceptional mental activities of attending, perceiving, remembering, feeling, and reasoning. Via these mental processes we understand and act on the material and social world.
In the town of Pondicherry in South India, where I sit as I write this, many do not share this assessment. There are those, including some close to me, who believe there are extrasensory paths to knowing the world that transcend the five senses, and that untested "natural" foods and methods of acquiring information are superior to those based in evidence. On this trip, for example, I learned that they believe a man has been able to stay alive without any caloric intake for months (although his weight falls, but only when he is under scientific observation).
Pondicherry is an Indian Union Territory that was controlled by the French for 300 years (staving off the British in many a battle right outside my window) and held on to it until a few years after Indian independence. It has, in addition to numerous other points of attraction, become a center for those who yearn for spiritual experience, attracting many (both whites and natives) to give up their worldly lives to pursue the advancement of the spirit, to undertake bodily healing, and to invest in good works on behalf of a larger community.
Yesterday, I met a brilliant young man who had worked as a lawyer for eight years and who now lives in the ashram and works in its book sales division. Sure, you retort, the profession of the law would turn any good person toward spirituality, but I assure you that the folks here have given up wealth and professional lives of a wide variety of sorts to pursue this manner of life. The point is that seemingly intelligent people crave non-rational modes of thinking, and the Edge question this year forced me to think not only about the toolkit of the scientist but about that of every person.
I do not mean to pick on any one city, and certainly not this unusual one in which so much good effort is put towards the arts and culture and social upliftment of the sort we would admire. But this is a town that also attracts a particular type of European, American, and Indian — those whose minds seem more naturally prepared to believe that unprocessed "natural" herbs do cure cancer and that standard medical care is to be avoided (until one desperately needs chemo), that Tuesdays are inauspicious for starting new projects, that particular points in the big toe control the digestive system, that the position of the stars at the time of their birth led them to Pondicherry through an inexplicable process emanating from a higher authority and through a vision from "the mother", a deceased French woman who dominates the ashram and surrounding area in death more than many successful politicians ever do in their entire lives.
These types of beliefs may seem extreme, but they are not considered as such in most of the world. Change the content and the same underlying false manner of thinking is readily observed just about anywhere — the 22 inches of snow that has fallen where I live in the United States while I'm away will no doubt bring forth beliefs of a god angered by crazy scientists touting global warming.
As I contemplate the single most powerful tool that could be put into the heads of every growing child and every adult seeking a rational path, scientists included, it is the simple and powerful concept of "signal detection". In fact, the Edge question this year happens to be one I've contemplated for a while — should anybody ever ask such a question, I've known the answer would be an easy one. I use Green and Swets's Signal Detection Theory and Psychophysics as the prototype, although the idea has its origins in earlier work among scientists concerned with the fluctuations of photons and their influence on visual detection, and sound waves and their influence on audition.
The idea underlying the power of signal detection theory is simple: the world gives noisy data, never pure. Auditory data, for instance, are degraded for a variety of reasons having to do with the physical properties of the communication of sound. The observing organism has properties that further affect how those data will be experienced and interpreted, such as ability (e.g., a person's auditory acuity), the circumstances under which the information is being processed (e.g., during a thunderstorm), and motivation (e.g., disinterest). Signal detection theory allows us to put both aspects of the stimulus and the respondent together to understand the quality of the decision that will result given the uncertain conditions under which data are transmitted, both physically and psychologically.
To understand the crux of signal detection theory, each event of any data impinging on the receiver (human or other) is coded into four categories, providing a language to describe the decision:
Hit: A signal is present and the signal is detected (correct response)
False Alarm: No signal is present but a signal is detected (incorrect response)
Miss: A signal is present but no signal is detected (incorrect response)
Correct Rejection: No signal is present and no signal is detected (correct response)
If the signal is clear (like a bright light against a dark background), and the decision maker has good visual acuity and is motivated to watch for the signal, we should see a large number of Hits and Correct Rejections and very few False Alarms and Misses. As these properties change, so does the quality of the decision. Whether the stimulus is a physical one like a light or sound, or a piece of information requiring an assessment about its truth, information almost always deviates from this ideal.
It is under such ordinary conditions of uncertainty that signal detection theory yields a powerful way to assess the stimulus and respondent qualities, including the respondent's idiosyncratic criterion (or cutting score, "c") for decision-making. The criterion is the point along the distribution at which the respondent switches from saying "no" to saying "yes".
The applications of signal detection theory have been in areas as diverse as locating objects by sonar, the quality of remembering, the comprehension of language, visual perception, consumer marketing, jury decisions, price predictions in financial markets, and medical diagnoses.
Signal detection theory should be in the toolkit of every scientist because it provides a mathematically rigorous framework for understanding the nature of decision processes. Its logic should be in the toolkit of every thinking person because it forces a completion of the four cells when analyzing the quality of any statement such as "Good management positions await Sagittarius this week".
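The four cells of the signal-detection table feed two standard quantities: sensitivity (d', how separable signal is from noise) and the criterion (c, the respondent's bias toward "yes" or "no"). A minimal sketch under the usual equal-variance Gaussian model, with invented trial counts and only the Python standard library:

```python
# Computing d' and the criterion c from the four signal-detection outcomes.
# Trial counts are invented for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

# Hypothetical tallies: 100 signal trials and 100 noise trials
hits, misses = 80, 20                       # signal present
false_alarms, correct_rejections = 30, 70   # signal absent

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' measures sensitivity: the distance between the signal and noise
# distributions in standard-deviation units.
d_prime = z(hit_rate) - z(fa_rate)

# c measures response bias: positive values mark a conservative observer
# who needs strong evidence before saying "yes".
criterion = -0.5 * (z(hit_rate) + z(fa_rate))

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

With these made-up counts the observer is fairly sensitive and slightly biased toward saying "yes". The astrology claim above, by contrast, invites us to tally only the Hits while the Misses and False Alarms go uncounted.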
Proxemics of Urban Sexuality
Architect, teaching at Politecnico of Milan, visiting professor at Harvard GSD,......
In every room, in every house, in every street, in every city, movements, relations and spaces are also defined by logics of attraction and repulsion between the sexualities of individuals.
Even the most insurmountable ethnic or religious barriers can suddenly disappear in the furor of intercourse; even the warmest and most cohesive community can rapidly dissolve in the absence of erotic tension.
To understand how our cosmopolitan and multi-gendered cities work, today we need a Proxemics of Urban Sexuality.
Physics, University of Illinois at Urbana-Champaign...
When you are facing in the wrong direction, progress means walking backwards. History suggests that our world view undergoes disruptive change not so much when science adds new concepts to our cognitive toolkit, but when it takes away old ones. The sets of intuitions that have been with us since birth define our scientific prejudices, and not only are poorly-suited to the realms of the very large and very small, but also fail to describe everyday phenomena. If we are to identify where the next transformation of our world view will come from, we need to take a fresh look at our deep intuitions. In the two minutes that it takes you to read this essay, I am going to try and rewire your basic thinking about causality.
Causality is usually understood as meaning that there is a single, preceding cause for an event. For example, in classical physics a ball may be flying through the air because it was hit by a tennis racket. My 16-year-old car always revs much too fast because the temperature sensor wrongly indicates that the engine temperature is cold, as if the car were in start-up mode. We are so familiar with causality as an underlying feature of reality that we hard-wire it into the laws of physics. It might seem that this would be unnecessary, but it turns out that the laws of physics do not distinguish between time going backwards and time going forwards. And so we make a choice about which sort of physical law we would like to have.
However, complex systems, such as financial markets or the Earth's biosphere, do not seem to obey causality. For every event that occurs, there are a multitude of possible causes, and the extent to which each contributes to the event is not clear, not even after the fact! One might say that there is a web of causation. For example, on a typical day, the stock market might go up or down by some fraction of a percentage point. The Wall Street Journal might blithely report that the stock market move was due to "traders taking profits" or perhaps "bargain-hunting by investors". The following day, the move might be in the opposite direction, and a different, perhaps contradictory, cause will be invoked. However, for each transaction, there is both a buyer and a seller, and their world views must be opposite for the transaction to occur. Markets work only because there is a plurality of views. To assign a single or dominant cause to most market moves is to ignore the multitude of market outlooks and to fail to recognize the nature and dynamics of the temporary imbalances between the numbers of traders who hold these differing views.
Similar misconceptions abound elsewhere in public debate and the sciences. For example, are there single causes for diseases? In some cases, such as Huntington's disease, the cause can be traced to a unique factor, in this case extra repetitions of a particular nucleotide sequence at a particular location in an individual's DNA, coding for the amino acid glutamine. However, even in this case, the age of onset and the severity of the condition are also known to be controlled by environmental factors and interactions with other genes. The web of causation has been for many decades a well-worked metaphor in epidemiology, but there is still little quantitative understanding of how the web functions or forms. As Krieger poignantly asked in a celebrated 1994 essay, "Has anyone seen the spider?"
The search for causal structure is nowhere more futile than in the debate over the origin of organismal complexity: intelligent design vs. evolution. Fueling the debate is a fundamental notion of causality, that there is a beginning to life, and that such a beginning must have had a single cause. On the other hand, if there is instead a web of causation driving the origin and evolution of life, a skeptic might ask: has anyone seen the spider?
It turns out that there is no spider. Webs of causation can form spontaneously through the concatenation of associations between the agents or active elements in the system. For example, consider the Internet. Although a unified protocol for communication (TCP/IP etc) exists, the topology and structure of the Internet emerged during a frenzied build-out, as Internet service providers staked out territory in a gold-rush of unprecedented scale. Remarkably, once the dust began to settle, it became apparent that the statistical properties of the resulting Internet were quite special: the time delays for packet transmission, the network topology, and even the information transmitted exhibit fractal properties.
However you look at the Internet, locally or globally, on short time scales or long, it looks exactly the same. Although the discovery of this fractal structure around 1995 was an unwelcome surprise, because standard traffic control algorithms as used by routers were designed assuming that all properties of the network dynamics would be random, the fractality is also broadly characteristic of biological networks. Without a master blueprint, the evolution of an Internet is subject to the same underlying statistical laws that govern biological evolution, and structure emerges spontaneously without the need for a controlling entity. Moreover, the resultant network can come to life in strange and unpredictable ways, obeying new laws whose origin cannot be traced to any one part of the network. The network behaves as a collective, not just the sum of parts, and to talk about causality is meaningless because the behavior is distributed in space and in time.
Between 2.42pm and 2.50pm on May 6, 2010, the Dow Jones Industrial Average experienced a rapid decline and subsequent rebound of nearly 600 points, an event of unprecedented magnitude and brevity. This disruption occurred as part of a tumultuous event on that day now known as the Flash Crash, which affected numerous market indices and individual stocks, even causing some stocks to be priced at unbelievable levels (Accenture, for example, was at one point priced at 1 cent).
With tick-by-tick data available for every trade, we can watch the crash unfold in slow motion, a film of a financial calamity. But the cause of the crash itself remains a mystery. The US Securities and Exchange Commission report on the flash crash was able to identify the trigger event (a $4 billion sale by a mutual fund), but could provide no detailed understanding of why this event caused the crash. The conditions that precipitated the crash were already embedded in the market's web of causation, a self-organized, rapidly evolving structure created by the interplay of high-frequency trading algorithms. The Flash Crash was the birth cry of a network coming to life, eerily reminiscent of Arthur C. Clarke's science fiction story "Dial F for Frankenstein", which begins "At 0150 GMT on December 1, 1975, every telephone in the world started to ring." I'm excited by the scientific challenge of understanding all this in detail, because … well, never mind. I guess I don't really know.
Professor of Astronomy at Harvard University and Director of the Harvard Origins......
The concept of 'otherness' or 'the Other' is about how a conscious human being perceives their own identity: "Who am I and how do I relate to others?" It is part of what defines the self and is a constituent of self-consciousness. It is a philosophical concept widely used in psychology and social science. Recent advances in the life and physical sciences have opened the possibility of new and even unexpected expansions of this concept.
From the map of the human genome to the diploid genomes of individuals, to the mapping of humans' geographic spread, and back in time to the Neanderthal genome, we now have new tools to address the age-old problem of human unity and human diversity. Reading the 'life code' of DNA does not stop there – it places humans in the vast and colorful mosaic of Earth life. 'Otherness' is placed in a totally new light. Our microbiomes – the trillions of microbes on and in each of us that are essential to our physiology – become part of our selves.
Astronomy and space science are intensifying the search for life on other planets – from Mars and the outer reaches of the Solar System, to Earth-like planets and super-Earths orbiting other stars. The chances of success may hinge on our understanding of the possible diversity of the chemical basis of life itself. 'Otherness': not among DNA-encoded species, but among life forms using different molecules to encode traits. Our 4-billion-year-old heritage of molecular innovation and design, versus 'theirs'. This is a cosmic first encounter that we might experience in our labs first. Last year's glimpse at JCVI-syn1.0 – the first bacterium controlled completely by a synthetic genome – is a prelude to this brave new field.
It is probably timely to ponder 'otherness' and its wider meaning yet again, as we embark on a new age of exploration. And as T.S. Eliot once predicted, we might arrive where we started and know our self for the first time.
Cognitive Scientist; Author, Kluge: The Haphazard Evolution of the Human Mind...
Hamlet may have said that human beings are noble in reason and infinite in faculty, but in reality — as four decades of experiments in cognitive psychology have shown — our minds are very finite, and far from noble. Knowing the limits of our minds can help make us better reasoners.
Almost all of those limits start with a peculiar fact about human memory: although we are pretty good at storing information in our brains, we are pretty poor at retrieving that information. We can recognize photos from our high school yearbooks decades later—yet still find it impossible to remember what we had for breakfast yesterday. Faulty memories have been known to lead to erroneous eyewitness testimony (and false imprisonment), to marital friction (in the form of overlooked anniversaries), and even to death (skydivers, for example, have been known to forget to pull their ripcords — accounting, by one estimate, for approximately 6% of skydiving fatalities).
Computer memory is much better than human memory because early computer scientists discovered a trick that evolution never did: organizing information by means of a sort of master map, in which each bit of information to be stored is assigned a specific, uniquely identifiable location in the computer's memory vaults. Human beings, in contrast, appear to lack such master memory maps, and instead retrieve information in far more haphazard fashion, using clues (or cues) to what they're looking for, rather than knowing in advance where in the brain a given memory lies.
In consequence, our memories cannot be searched as systematically or as reliably as those of a computer (or an internet database). Instead, human memories are deeply subject to context. Scuba divers, for example, are better at remembering the words they study underwater when they are tested underwater (relative to when they are tested on land), even if the words have nothing to do with the sea.
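The difference between the computer's master map and the brain's cue-driven retrieval can be sketched in a few lines of Python. This is a toy illustration built on invented data and an invented `recall` function, not a cognitive model:

```python
# A toy contrast between machine and human memory retrieval. Everything here
# (the data, the cue sets, the `recall` function) is invented for illustration.

# Computer-style memory: every item sits at a unique, known address (a dict
# key), so lookup is exact and independent of context.
computer_memory = {
    "breakfast_2011_05_15": "oatmeal",
    "yearbook_photo_page_12": "smiling, front row",
}
assert computer_memory["breakfast_2011_05_15"] == "oatmeal"

# Human-style memory (sketch): no master map. Retrieval matches the cues of
# the current context against stored traces and returns the best match.
memories = [
    ({"water", "wetsuit", "cold"}, "the word list studied underwater"),
    ({"kitchen", "smell", "morning"}, "what was eaten for breakfast"),
]

def recall(context_cues):
    # Return the trace whose stored cues overlap most with the context.
    return max(memories, key=lambda m: len(m[0] & context_cues))[1]

print(recall({"water", "cold"}))       # context brings back the diving memory
print(recall({"kitchen", "morning"}))  # a different context, a different trace
```

The second lookup style is why context matters so much: the very same query succeeds or fails depending on which cues the current situation happens to supply.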
Sometimes this sensitivity to context is useful. We are better able to remember what we know about cooking when we are in the kitchen than when we are skiing, and vice versa.
But it also comes at a cost: when we need to remember something in a situation other than the one in which it was stored, it's often hard to retrieve it. One of the biggest challenges in education, for example, is to get children to take what they learn in school and apply it to real world situations, in part because context-driven memory means that what is learned in school tends to stay in school.
Perhaps the most dire consequence is that human beings tend almost invariably to be better at remembering evidence that is consistent with their beliefs than evidence that might disconfirm them. When two people disagree, it is often because their prior beliefs lead them to remember (or focus on) different bits of evidence. To consider something well, of course, is to evaluate both sides of an argument, but unless we also go the extra mile of deliberately forcing ourselves to consider alternatives—not something that comes naturally—we are more prone to recalling evidence consistent with a proposition than inconsistent with it.
Overcoming this mental weakness, known as confirmation bias, is a lifelong struggle; recognizing that we all suffer from it is an important first step. To the extent that we can beware of this limitation in our brains, we can try to work around it, compensating for our inborn tendencies towards self-serving and biased recollections by disciplining ourselves to consider not just the data that might fit with our own beliefs, but also the data that might lead other people to have beliefs that differ from our own.
Mathematician and Economist; Principal, Natron Group...
The sophisticated "scientific concept" with the greatest potential to enhance human understanding may be argued to come not from the halls of academe, but rather from the unlikely research environment of professional wrestling.
Evolutionary biologists Richard Alexander and Robert Trivers have recently emphasized that it is deception rather than information that often plays the decisive role in systems of selective pressures. Yet most of our thinking continues to treat deception as something of a perturbation on the exchange of pure information, leaving us unprepared to contemplate a world in which fakery may reliably crowd out the genuine. In particular, humanity's future selective pressures appear likely to remain tied to economic theory which currently uses as its central construct a market model based on assumptions of perfect information.
If we are to take selection more seriously within humans, we may fairly ask what rigorous system would be capable of tying together an altered reality of layered falsehoods in which absolutely nothing can be assumed to be as it appears. Such a system, in continuous development for more than a century, is known to exist and now supports an intricate multi-billion dollar business empire of pure hokum. It is known to wrestling's insiders as "Kayfabe".
Because professional wrestling is a simulated sport, all competitors who face each other in the ring are actually close collaborators who must form a closed system (called "a promotion") sealed against outsiders. With external competitors generally excluded, antagonists are chosen from within the promotion, and their ritualized battles are largely negotiated, choreographed, and rehearsed at a significantly decreased risk of injury or death. With outcomes predetermined under Kayfabe, betrayal in wrestling comes not from engaging in unsportsmanlike conduct, but from the surprise appearance of actual sporting behavior. Such unwelcome sportsmanship, which "breaks Kayfabe", is called "shooting", to distinguish it from the expected scripted deception, called "working".
Were Kayfabe to become part of our toolkit for the twenty-first century, we would undoubtedly have an easier time understanding a world in which investigative journalism seems to have vanished and bitter corporate rivals cooperate on everything from joint ventures to lobbying efforts. Perhaps the confusing battles between "freshwater" Chicago macroeconomists and Ivy League "saltwater" theorists could best be understood as happening within a single "orthodox promotion", given that both groups suffered no injury from failing (equally) to predict the recent financial crisis. The decades-old battle in theoretical physics over bragging rights between the "string" and "loop" camps would seem to be an even more significant example within the hard sciences of a collaborative intra-promotion rivalry, given the apparent failure of both groups to produce a quantum theory of gravity.
What makes Kayfabe remarkable is that it gives us potentially the most complete example of the general process by which a wide class of important endeavors transition from failed reality to successful fakery. While most modern sports enthusiasts are aware of wrestling's status as a pseudo sport, what few alive today remember is that it evolved out of a failed real sport (known as "catch" wrestling) which held its last honest title match early in the 20th century. Typical matches could last hours with no satisfying action, or end suddenly with crippling injuries to a promising athlete in whom much had been invested. This highlighted the close relationship between two paradoxical risks which define the category of activity which wrestling shares with other human spheres:
• A) Occasional but Extreme Peril for the participants.
• B) General Monotony for both audience and participants.
Kayfabrication (the process of transition from reality towards Kayfabe) arises out of attempts to deliver a dependably engaging product for a mass audience while removing the unpredictable upheavals that imperil participants. As such Kayfabrication is a dependable feature of many of our most important systems which share the above two characteristics such as war, finance, love, politics and science.
Importantly, Kayfabe also seems to have discovered the limits of how much disbelief the human mind is capable of successfully suspending before fantasy and reality become fully conflated. Wrestling's system of lies has recently become so intricate that wrestlers have occasionally found themselves engaging in real-life adultery immediately following the introduction of a fictitious adulterous plot twist in a Kayfabe back-story. Eventually, even Kayfabe itself became a victim of its own success, as it grew to a level of deceit that could not be maintained when the wrestling world collided with outside regulators exercising oversight over major sporting events.
At the point Kayfabe was forced to own up to the fact that professional wrestling contained no sport whatsoever, it did more than avoid being regulated and taxed into oblivion. Wrestling discovered the unthinkable: its audience did not seem to require even a thin veneer of realism. Professional wrestling had come full circle to its honest origins by at last moving the responsibility for deception off of the shoulders of the performers and into the willing minds of the audience.
Kayfabe, it appears, is a dish best served client-side.
It Ain't Necessarily So
Architect, Researcher, MIT; Founder, Materialecology...
Preceding the scientific method is a way of being in the world that defies the concept of a solid, immutable reality. Challenging this apparent reality in a scientific manner can potentially unveil a revolutionary shift in its representation and thus recreate reality itself. Such suspension of belief implies the temporary forfeiting of some explanatory power of old concepts and the adoption of a new set of assumptions in their place.
Reality is the state of things as they actually exist, rather than the state in which they may appear or are thought to be — a rather ambiguous definition, given our known limits to observation and comprehension of concepts and methods. This ambiguity, captured by the aphorism that things are not what they seem, and again with swing in Sportin' Life's song It Ain't Necessarily So, is a thread that seems to appear consistently throughout the history of science and the evolution of the natural world. In fact, ideas that have challenged accepted doctrines and created new realities have prevailed in fields ranging from warfare to flight technology, from physics to medicinal discoveries.
Recall the battle between David and Goliath mentioned in Gershwin's song. The giant warrior, evidently unbeatable by every measure of reality, is at once defeated by a lyre-playing underdog who challenges this seemingly apparent reality by devising a nearly scientific and unconventional combat strategy.
The postulation that mighty opponents have feeble spots also holds true for the war against ostensibly incurable diseases. Edward Jenner's inoculation experiments with the cowpox virus to build immunity against the deadly scourge of smallpox gave rise to the vaccines that have since helped prevent diseases such as polio. The very idea that an enemy — a disease — is to be overcome exclusively by brute force was defied by the counter-intuitive hypothesis that the disease itself — or a mild version of its toxins — might be internally memorized by the human immune system as a preventive measure.
Da Vinci's flying machine is another case in point. Challenging the myth of Icarus and its moral that humans should not attempt flight, Leonardo designed a hang glider inspired by his studies of the structure-function relationships of bird wings. It is the first flying machine known to man, and from it our entire aviation industry has evolved.
Challenging what was assumed to be the nature of reality, conveniently supported by religious authorities, Copernicus disputes the Ptolemaic model of the heavens, which postulated the Earth at the center of the universe, by providing the heliocentric model with the Sun at the center of our solar system. The Scientific Revolution of the 16th century then followed, laying the foundations for modern science.
But the Gospel takes many forms besides religion or received wisdom. Occasionally the Gospel emerges as science itself at a particular moment in history. Einstein challenged the Gospel of his day by introducing the concept of space-time and upending our perception of the universe.
It Ain't Necessarily So is a drug dealer's attempt to challenge the gospel of religion by expressing doubts in the Bible: the song is indeed immortal, but Sportin' Life himself does not surpass doubt. In science, Sportin' Life's attitude is an essential first step forward, but it ain't sufficiently so. It is a step that must be followed by scientific concepts and methods. Still, it is worth remembering to take your Gospel with a grain of salt because, sometimes, it ain't nessa, ain't nessa, it ain't necessarily so.
Psychologist, Cornell University...
The human brain is an amazing pattern-detecting machine. We possess a variety of mechanisms that allow us to uncover hidden relationships between objects, events, and people. Without these, the sea of data hitting our senses would surely appear random and chaotic. But when our pattern-detection systems misfire they tend to err in the direction of perceiving patterns where none actually exist.
The German neurologist Klaus Conrad coined the term "Apophenia" to describe this tendency in patients suffering from certain forms of mental illness. But it is increasingly clear from a variety of findings in the behavioral sciences that this tendency is not limited to ill or uneducated minds; healthy, intelligent people make similar errors on a regular basis: a superstitious athlete sees a connection between victory and a pair of socks, a parent refuses to vaccinate her child because of a perceived causal connection between inoculation and disease, a scientist sees hypothesis-confirming results in random noise, and thousands of people believe the random "shuffle" function on their music software is broken because they mistake spurious coincidence for meaningful connection.
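The "broken shuffle" complaint is easy to examine with a small simulation (the numbers below are purely illustrative). Genuinely random sequences routinely contain streaks long enough to feel rigged:

```python
import random

def longest_run(seq):
    # Length of the longest run of identical consecutive items.
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)  # fixed seed so the experiment is repeatable
runs = [longest_run([random.randint(0, 1) for _ in range(200)])
        for _ in range(1000)]
avg = sum(runs) / len(runs)
print(f"average longest streak in 200 fair coin flips: {avg:.1f}")
```

In 200 fair flips the longest streak of identical outcomes averages around seven to eight, far longer than most people's intuition about randomness, which is why an honest shuffle can sound like it keeps "getting stuck" on one artist.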
In short, the pattern detection that is responsible for so much of our species' success can just as easily betray us. This tendency to perceive patterns where none exist is likely an inevitable by-product of our adaptive pattern-detecting mechanisms. But acknowledging, tracking, and guarding against this potentially dangerous tendency would be easier if "everyday Apophenia" were an easily accessible concept.
Senior fellow for environmental understanding at Pace University's Academy...
To sustain progress on a finite planet that is increasingly under human sway, but also full of surprises, what is needed is a strong dose of anthropophilia. I propose this word as shorthand for a rigorous and dispassionate kind of self-regard, even self-appreciation, to be employed when individuals or communities face consequential decisions attended by substantial uncertainty and polarizing disagreement.
The term is an intentional echo of Ed Wilson's valuable effort to nurture biophilia, the part of humanness that values and cares for the facets of the non-human world we call nature. What's been missing too long is an effort to fully consider, even embrace, the human role within nature and — perhaps more important still — our own inner nature as well.
Historically, many efforts to propel a durable human approach to advancement were shaped around two organizing ideas: "woe is me" and "shame on us," with a good dose of "shame on you" thrown in.
Woe is paralytic, while blame is both divisive and often misses the real target. (Who's the bad guy, BP or those of us who drive and heat with oil?)
Discourse framed around those concepts too often produces policy debates that someone once described to me, in the context of climate, as "blah, blah, blah bang." The same phenomenon can as easily be seen in the unheeded warnings leading to the most recent financial implosion and the attack on the World Trade Center.
More fully considering our nature — both the "divine and felonious" sides, as Bill Bryson has summed us up — could help identify certain kinds of challenges that we know we'll tend to get wrong.
The simple act of recognizing such tendencies could help refine how choices are made — at least giving slightly better odds of getting things a little less wrong the next time. At the personal level, I know when I cruise into the kitchen tonight I'll tend to prefer to reach for a cookie instead of an apple. By pre-considering that trait, I might have a slightly better chance of avoiding a couple of hundred unnecessary calories.
Here are a few instances where this concept is relevant on larger scales.
There's a persistent human pattern of not taking broad lessons from localized disasters. When China's Sichuan province was rocked by a severe earthquake, tens of thousands of students (and their teachers) died in collapsed schools. Yet the American state of Oregon, where more than a thousand schools are already known to be similarly vulnerable when the great Cascadia fault off the Northwest Coast next heaves, still lags terribly in speeding investments in retrofitting.
Sociologists understand with quite a bit of empirical backing why this disconnect exists even though the example was horrifying and the risk in Oregon is about as clear as any scientific assessment can be. But does that knowledge of human biases toward the "near and now" get taken seriously in the realms where policies are shaped and the money to carry them out is authorized? Rarely, it seems.
Social scientists also know, with decent rigor, that the fight over human-driven global warming — both over the science and policy choices — is largely cultural. As in many other disputes (consider health care) the battle is between two quite fundamental subsets of human communities — communitarians (aka, liberals) and individualists (aka, libertarians). In such situations, a compelling body of research has emerged showing how information is fairly meaningless. Each group selects information to reinforce a position and there are scant instances where information ends up shifting a position.
That's why no one should expect the next review of climate science from the Intergovernmental Panel on Climate Change to suddenly create a harmonious path forward.
The more such realities are recognized, the more likely it is that innovative approaches to negotiation can build from the middle, instead of arguing endlessly from the edge. The same body of research on climate attitudes, for example, shows far less disagreement on the need for advancing the world's limited menu of affordable energy choices.
Murray Gell-Mann has spoken often of the need, when faced with multi-dimensional problems, to take a "crude look at the whole" — a process he has even given an acronym, CLAW. It's imperative, where possible, for that look to include an honest analysis of the species doing the looking, as well.
There will never be a way to invent a replacement for, say, the United Nations or the House of Representatives. But there is a ripe opportunity to try new approaches to constructive discourse and problem solving, with the first step being an acceptance of our humanness, for better and worse.
The Dece(i)bo Effect
Professor of Medicine at UCSD...
The Dece(i)bo Effect — think portmanteau of Deceive and Placebo — refers to the facile application of constructs, without unpackaging the concept and the assumptions on which it relies, in a fashion that, rather than benefiting thinking, leads reasoning astray.
Words and phrases that capture a concept enter common parlance: Occam's razor, placebo, Hawthorne effect. Such phrases and code-words in principle facilitate discourse — and can indeed do so. Deploying the word or catchphrase adds efficiency to the interchange, by obviating the need for a pesky review of the principles and assumptions encapsulated in the word.
Unfortunately, bypassing the need to articulate the conditions and assumptions on which validity of the construct rests, may lead to bypassing consideration of whether these conditions and assumptions legitimately apply. Use of the term can then, far from fostering sound discourse, serve to undermine it.
Take, for example, the "placebo," and "placebo effects." Unpackaging the terms, a "placebo" is in principle something that is physiologically "inert" — but believed by the recipient to be active, or possibly so. The term "placebo effect" refers to improvement of a condition when persons have been placed on a placebo, due to effects of expectation/suggestion.
With these terms well ensconced in the vernacular, Dece(i)bo Effects associated with them are much in evidence. Key presumptions regarding placebos and placebo effects are more typically wrong than not.
1. When hearing the word "placebo," scientists often presume "inert" - without stopping to ask: what is that allegedly physiologically inert substance? Indeed, even in principle, what could it be?
There isn't anything known to be physiologically inert. There are no regulations about what constitute placebos; and their composition — commonly determined by the manufacturer of the drug under study — is typically undisclosed. Among the uncommon cases where placebo composition has been noted, there are documented instances in which the placebo composition apparently produced spurious effects. Two studies used corn oil and olive oil placebos for cholesterol-lowering drugs: one noted that the "unexpectedly" low rate of heart attacks in the control group may have contributed to failure to see a benefit from the cholesterol drug. Another study noted "unexpected" benefit of a drug to gastrointestinal symptoms in cancer patients. But cancer patients bear increased likelihood of lactose intolerance — and the placebo was lactose, a "sugar pill." When the term "placebo" substitutes for actual ingredients, any thinking about how the composition of the control agent may have influenced the study is circumvented.
2. Because there are many settings in which persons with a problem, given placebo, report sizeable improvement on average when they are re-queried (see 3), many scientists have accepted that "placebo effects" — of suggestion — are both large in magnitude and widespread in the scope of what they benefit.
The Danish researcher Asbjørn Hróbjartsson conducted a systematic review of studies that compared a placebo to no treatment. He found that the placebo generally does: nothing. In most instances, there is no placebo effect. Mild "placebo effects" are seen, in the short term, for pain and anxiety. Placebo effects for pain are reported to be blocked by naloxone, an opiate antagonist — specifically implicating endogenous opiates in pain placebo effects, which would not be expected to benefit every possible outcome that might be measured.
3. When hearing that persons with a problem placed on a "placebo" report improvement, scientists commonly presume this must be due to the "placebo effect" - the effect of expectation/suggestion.
However, the effects are usually something else entirely. For instance: natural history of the disease, and regression to the mean. Consider a distribution, such as a bell-shape. Whether the outcome of interest is pain, blood pressure, cholesterol, or other, persons are classically selected for treatment if they are at one end of the distribution - say, the high end. But these outcomes are quantities that vary (for instance from physiological variation, natural history, measurement error...), and on average the high values will vary back down — a phenomenon termed "regression to the mean" that operates, placebo or no. (Hence, Hróbjartsson's findings.)
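Regression to the mean is easy to demonstrate with a simulation. In the hedged sketch below, every number is invented: each "patient" has a stable true level around 100 plus independent measurement noise, and the group selected for high first measurements looks substantially "improved" on re-measurement with no treatment at all:

```python
import random
import statistics

random.seed(1)

# Invented model: each person has a stable true level (mean 100) plus
# independent measurement noise on every measurement occasion.
true_levels = [random.gauss(100, 10) for _ in range(10000)]
first = [t + random.gauss(0, 10) for t in true_levels]
second = [t + random.gauss(0, 10) for t in true_levels]

# Select the high end of the distribution on the first measurement (top 10%),
# as a trial enrolling patients with "high" values would.
cutoff = sorted(first)[-1000]
selected = [i for i, x in enumerate(first) if x >= cutoff]

m1 = statistics.mean(first[i] for i in selected)
m2 = statistics.mean(second[i] for i in selected)
print(f"selected group, first measurement:    {m1:.1f}")
print(f"same group re-measured, no treatment: {m2:.1f}")
```

The drop from the first mean to the second is exactly the kind of "improvement" that, in a trial's placebo arm, invites misattribution to suggestion, even though here nothing whatsoever was administered.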
A different dece(i)bo problem beset Ted Kaptchuk's recent Harvard study, in which researchers gave a "placebo," or nothing, to people afflicted with irritable bowel syndrome. They administered the placebo in a bottle boldly labeled "Placebo," and advised patients that they were receiving placebos, which were known to be potent. The thesis was that one might harness the effects of expectation honestly, without deception, by telling subjects how powerful placebos in fact were - and by developing a close relationship with subjects. Researchers met repeatedly with subjects, gained subjects' appreciation for their concern and listening (as the researchers made clear), and repeatedly told subjects that placebos are powerful. Those placed on placebo obliged the researchers by telling them they had gotten better, more so than those on nothing. The scientists attributed this to a placebo effect.
But what's to say patients weren't simply telling the scientists what they thought the scientists wished to hear? Such desire to please (a form, perhaps, of "social approval" reporting bias) had fertile grounds in which to operate and create what was interpreted as a placebo effect — which implies actual subjective benefit to symptoms. One wonders if so great an error of presumption would operate were there not an existing term, "placebo effect," to signify the interpretation the Harvard group chose.
Another explanation consistent with these results is specific physiological benefit. The study used a nonabsorbed fiber — microcrystalline cellulose — as the "Placebo" that subjects were told would be effective. The authors are applauded for disclosing its composition. But other nonabsorbed fibers benefit both constipation and diarrhea — symptoms of irritable bowel — and are prescribed for that purpose; psyllium is an example. Thus, specific physiological benefit of the "Placebo" to symptoms cannot be excluded.
Together these points illustrate that the term "placebo" cannot be presumed to imply "inert" (and generally does not); and that when studies see large benefit to symptoms in patients treated with "placebo" (expected from distribution considerations alone), one cannot infer these arose from large benefits of suggestion to symptoms (which evidence indicates may seldom operate).
Thus, rather than facilitating sound reasoning, evidence suggests that in many cases — including high-stakes settings in which inferences may propagate to medical practice — substituting a term, here "placebo" or "placebo effect," for the concept it is intended to convey may actually thwart or bypass critical thinking about key issues, with implications for fundamental concerns for us all.
A Statistically Significant Difference in Understanding the Scientific Process
Professor, Claremont McKenna College; Past-president, American Psychological......
Statistically significant difference — It is a simple phrase that is essential to science and that has become common parlance among educated adults. These three words convey a basic understanding of the scientific process, random events, and the laws of probability. The term appears almost everywhere that research is discussed — in newspaper articles, advertisements for "miracle" diets, research publications, and student laboratory reports, to name just a few of the many diverse contexts where the term is used. It is a shorthand abstraction for a sequence of events that includes an experiment (or other research design), the specification of a null and alternative hypothesis, (numerical) data collection, statistical analysis, and the probability of an unlikely outcome. That is a lot of science conveyed in a few words.
It would be difficult to understand the outcome from any research without at least a rudimentary understanding of what is meant by the conclusion that the researchers found or did not find evidence of a "statistically significant difference." Unfortunately, the old saying that "a little knowledge is a dangerous thing" applies to the partial understanding of this term. One problem is that "significant" has a different meaning when used in everyday speech than when used to report research findings.
Most of the time, the word "significant" means that something important happened. For example, if a physician told you that you would feel significantly better following surgery, you would correctly infer that your pain would be reduced by a meaningful amount—you would feel less pain. But, when used in "statistically significant difference," the term "significant" means that the results are unlikely to be due to chance (if the null hypothesis were true); the results may or may not be important. In addition, sometimes, the conclusion will be wrong because researchers can only assert their conclusion at some level of probability. "Statistically significant difference" is a core concept in research and statistics, but as anyone who was taught undergraduate statistics or research methods can tell you, it is not an intuitive idea.
Despite the fact that "statistically significant difference" communicates a cluster of ideas that are essential to the scientific process, there are many pundits who would like to see it removed from our vocabulary because it is frequently misunderstood. Its use underscores the marriage of science and probability theory, and despite its popularity, or perhaps because of it, some experts have called for a divorce because the term implies something that it does not, and the public is often misled. In fact, experts are often misled as well. Consider this hypothetical example: In a well-done study that compares the effectiveness of two drugs relative to a placebo, it is possible that Drug X is statistically significantly different from a placebo and Drug Y is not, yet Drugs X and Y might not be statistically significantly different from each other. This could result when Drug X is statistically significantly different from placebo at a probability level of p < .04, but Drug Y is statistically significantly different from a placebo only at a probability level of p < .06, which is higher than most a priori levels used to test for statistical significance. If just reading about this makes your head hurt, you are among the masses who believe they understand this critical shorthand phrase at the heart of the scientific method, but who may actually have only a shallow level of understanding.
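The drug example can be checked with a little arithmetic. The sketch below (Python; the group means, standard deviation, and sample size are invented, chosen only to reproduce the p < .04 versus p < .06 situation) uses a simple two-sample z-test:

```python
from math import sqrt
from statistics import NormalDist

def p_two_sided(mean_a, mean_b, sd, n):
    """Two-sided p-value for a z-test on the difference of two group means
    (equal known standard deviation sd, equal group size n)."""
    z = abs(mean_a - mean_b) / (sd * sqrt(2 / n))
    return 2 * (1 - NormalDist().cdf(z))

n, sd = 100, 1.0
placebo, drug_x, drug_y = 0.0, 0.29, 0.27   # hypothetical group means

print(f"X vs placebo: p = {p_two_sided(drug_x, placebo, sd, n):.3f}")  # below .05
print(f"Y vs placebo: p = {p_two_sided(drug_y, placebo, sd, n):.3f}")  # above .05
print(f"X vs Y:       p = {p_two_sided(drug_x, drug_y, sd, n):.3f}")   # nowhere close
```

Drug X clears the conventional .05 threshold and Drug Y narrowly misses it, yet X and Y are statistically indistinguishable from each other: the difference between "significant" and "not significant" is not itself significant.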
There are many critically important ways that findings of "statistically significant difference" can be misleading. But, even though there are real problems with understanding this term, it is firmly entrenched in everyday discussions of research, and for the general public, it shows some knowledge of the process of science.
A better understanding of the pitfalls associated with this term would go a long way toward improving our "cognitive toolkits." If common knowledge of what this term means included the ideas that a) the findings may not be important and b) conclusions based on finding, or failing to find, statistically significant differences may be wrong, then we would have significantly advanced general knowledge. When people read or use the term "statistically significant difference," it is an affirmation of the scientific process, which, for all of its limitations and misunderstandings, is a significant advance over alternative ways of knowing about the world. If we could just add two more key concepts to the meaning of that phrase, we could improve how the general public thinks about science.
The Senses and the Multi-Sensory
Professor & Director, Institute of Philosophy School of Advanced Study University......
For far too long we have laboured under a faulty conception of the senses. Ask anyone you know how many senses we have and they will probably say five; unless they start talking to you about a sixth sense. But why pick five? What of the sense of balance provided by the vestibular system, telling you whether you are going up or down in a lift, forwards or backwards on a train, or side to side on a boat? What about proprioception that gives you a firm sense of where your limbs are when you close your eyes? What about feeling pain, hot and cold? Are these just part of touch, like feeling velvet or silk? And why think of sensory experiences like seeing, hearing, tasting, touching and smelling as being produced by a single sense?
Contemporary neuroscientists have postulated two visual systems — one responsible for how things look to us, the other for controlling action — that operate independently of one another. The eye may fall for visual illusions but the hand does not, reaching smoothly for a shape that looks larger than it is to the observer.
And it doesn't stop here. There is good reason to think that we have two senses of smell: an external sense of smell, orthonasal olfaction, produced by inhaling, that enables us to detect things in the environment such as food, predators, or smoke; and an internal sense, retronasal olfaction, produced by exhaling, that enables us to detect the quality of what we have just eaten, allowing us to decide whether we want any more or should expel it.
Associated with each sense of smell is a distinct hedonic response. Orthonasal olfaction gives rise to the pleasure of anticipation. Retronasal olfaction gives rise to the pleasure of reward. Anticipation is not always matched by reward. Have you ever noticed how the enticing aromas of freshly brewed coffee are never quite matched by the taste? There is always a little disappointment. Interestingly, the one food where the intensity of orthonasally and retronasally judged aromas match perfectly is chocolate. We get just what we expected, which may explain why chocolate is such a powerful stimulus.
Besides the proliferation of the senses in contemporary neuroscience, another major change is taking place. We used to study the senses in isolation, with the great majority of researchers focusing on vision. Things are rapidly changing. We now know that the senses do not operate in isolation, but combine at both early and late stages of processing to produce our rich perceptual experiences of our surroundings. It is almost never the case that our experience presents us with just sights or sounds. We are always enjoying conscious experiences made up of sights and sounds, smells, the feel of our body, the taste in our mouths; and yet these are not presented as separate sensory parcels. We simply take in the rich and complex scene without giving much thought to how the different contributors produce the whole experience.
We give little thought to how smell provides a background to every conscious waking moment. People who lose their sense of smell can be plunged into depression and show less sign of recovery a year later than people who lose their sight. This is because familiar places no longer smell the same, and people no longer have their reassuring olfactory signature. Also, patients who lose their smell believe they have lost their sense of taste. When tested, they acknowledge that they can taste sweet, sour, salty, bitter, savoury, and metallic. But everything else, missing from the taste of what they are eating, is due to retronasal smell.
What we call taste is one of the most fascinating case studies for how inaccurate our view of our senses is: it is not produced by the tongue alone but is always an amalgam of taste, touch and smell. Touch contributes to sauces tasting creamy, and other foods tasting chewy, crisp, or stale. The only difference between potato chips that "taste" fresh and those that "taste" stale is texture. The largest part of what we call "taste" is in fact smell in the form of retronasal olfaction, which is why people who lose their ability to smell say they can no longer taste anything. Taste, touch and smell are not merely combined to produce experiences of foods or liquids; rather, the information from the separate sensory channels is fused into a unified experience of what we call taste and food scientists call flavour.
Flavour perception is the result of multi-sensory integration of gustatory, olfactory and oral somatosensory information into a single experience whose components we are unable to distinguish. It is one of the most multi-sensory experiences we have and can be influenced by both sight and sound. The colours of wines and the sounds food make when we bite or chew them can have large impacts on our resulting appreciation and assessment, and irritation of the trigeminal nerve in the face will make chillies feel "hot" and menthol feel "cool" in the mouth without any actual change in temperature.
In sensory perception, multi-sensory integration is the rule not the exception. In audition, we don't just hear with our ears, we use our eyes to locate the apparent sources of sounds in the cinema where we "hear" the voices coming from the actors' mouths on the screen although the sounds are coming from the sides of the theatre. This is known as the ventriloquism effect. Similarly, retronasal odours detected by olfactory receptors in the nose are experienced as tastes in the mouth. The sensations get re-located to the mouth because oral sensations of chewing or swallowing capture our attention, making us think these olfactory experiences are occurring in the same place.
Other surprising collaborations among the senses are due to cross-modal effects, where stimulation of one sense boosts activity in another. Looking at someone's lips across a crowded room can improve our ability to hear what they are saying, and the smell of vanilla can make a liquid we sip "taste" sweeter, and less sour. This is why we say vanilla is sweet smelling, although sweet is a taste, and pure vanilla is not sweet at all. Industrial manufacturers know about these effects and exploit them. Certain aromas in shampoos, for example, can make the hair "feel" softer; and red coloured drinks "taste" sweet, while drinks with a light green colour "taste" sour. In many of these interactions vision will dominate, but not in every case. Anyone unlucky enough to have a disturbance in their vestibular system will feel that the world is spinning, although cues from the eyes and the body should be telling them everything is still. Instead, the brain goes with the combined picture, and vision and proprioception fall in line. Luckily, our senses usually cooperate to get us around the world, and the world we inhabit is not a sensory but a multisensory world.
We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are "probably" and "usually" — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as "with 70% certainty," likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.
Imagine the typical emotional reaction to seeing a spider: fear, ranging from minor trepidation to terror. But what is the likelihood of dying from a spider bite? Fewer than four people a year (on average) die from spider bites, putting the expected annual risk of death-by-spider at roughly one in a hundred million. This risk is so minuscule that it is actually counterproductive to worry about it! Millions of people die each year from stress-related illnesses.
The startling implication is that the risk of being bitten and killed by a spider is less than the risk that being afraid of spiders will kill you from increased stress. Our irrational fears and inclinations are costly. The typical reaction to seeing a sugary donut is the desire to consume it. But, given the potential negative impact of that donut, including the increased risk of heart disease and reduction in overall health, our reaction should rationally be one of fear and revulsion. It may seem absurd to fear a donut — or, even more dangerous, a cigarette — but this reaction rationally reflects the potential negative impact on our lives.
We are especially ill-equipped to manage risk when dealing with small likelihoods of major events. This is evidenced by the success of lotteries and casinos at taking people's money, but there are many other examples. The likelihood of being killed by terrorism is extremely low, yet we have instituted actions to counter terrorism that significantly reduce our quality of life. As a recent example, x-ray body scanners could increase the risk of cancer to a degree greater than the risk from terrorism — the same sort of counterproductive overreaction as the one to spiders. This does not imply we should let spiders, or terrorists, crawl all over us — but the risks need to be managed rationally.
Socially, the act of expressing uncertainty is a display of weakness. But our lives are awash in uncertainty, and rational consideration of contingencies and likelihoods is the only sound basis for good decisions. As another example, a federal judge recently issued an injunction blocking stem cell research funding. Viewed shallowly, the implication is that some scientists won't be getting money; but what is really at stake is much more important. The probability that stem cell research will quickly lead to life saving medicine is low, but, if successful, the positive impact could be huge. If one considers outcomes and approximates the probabilities, the conclusion is that the judge's decision destroyed, in probabilistic expectation, the lives of thousands of people.
How do we make rational decisions based on contingencies? That judge didn't actually cause thousands of people to die... or did he? If we follow the "many worlds" interpretation of quantum physics — the most direct interpretation of its mathematical description — then our universe is continually branching into all possible contingencies, and there is a world in which stem cell research saves millions of lives, and another world in which people die because of the judge's decision. Using the "frequentist" method of calculating probability, we have to add the probabilities of the worlds in which an event occurs to obtain the probability of that event.
Quantum mechanics dictates that the world we experience will happen according to this probability — the likelihood of the event. In this bizarre way, quantum mechanics reconciles the frequentist and "Bayesian" points of view, equating the frequency of an event over many possible worlds with its likelihood. An "expectation value," such as the expected number of people killed by the judge's decision, is the number of people killed in the various contingencies, weighted by their probabilities. This expected value is not necessarily likely to happen, but is the weighted average of the expected outcomes — useful information when making decisions. In order to make good decisions about risk we need to become better at these mental gymnastics, improve our language, and retrain our intuition.
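An expectation value of this kind is just a probability-weighted average over contingencies. A minimal sketch in Python (the branches and every number in them are entirely hypothetical, invented only to show the computation):

```python
# Hypothetical contingencies for the funding decision: (probability, lives lost)
contingencies = [
    (0.05, 100_000),  # the research would quickly have produced life-saving therapies
    (0.25, 10_000),   # the research would have helped substantially
    (0.70, 0),        # the research would not have panned out
]

# Probabilities over the branches must sum to 1.
assert abs(sum(p for p, _ in contingencies) - 1.0) < 1e-9

# The expectation value: each outcome weighted by its probability.
expected_lives_lost = sum(p * cost for p, cost in contingencies)
print(f"expected lives lost: {expected_lives_lost:,.0f}")
```

No single branch need produce this exact number; it is the weighted average across all of them, which is what matters for comparing decisions.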
Perhaps the best arena for honing our skills and making precise probabilistic assessments would be a betting market — an open site for betting on the outcomes of many quantifiable and socially significant events. In making good bets, all the tools and shorthand abstractions of Bayesian inference come into play — translating directly to the ability to make good decisions. With these skills, the risks we face in everyday life would become clearer, and we would develop more rational intuitive responses to uncalculated risks, based on collective rational assessment and social conditioning.
We might get over our excessive fear of spiders, and develop a healthy aversion to donuts, cigarettes, television, and stressful full-time employment. We would become more aware of the low cost compared to probable rewards of research, including research into improving the quality and duration of human life. And, more subtly, as we became more aware and apprehensive of ubiquitous vague language, such as "probably" and "usually," our standards of probabilistic description would improve.
Making good decisions requires concentrated mental effort; and if we overdo it we run the risk of being counterproductive through increased stress and wasted time. So it's best to balance, and play, and take healthy risks — as the greatest risk is that we'll get to the end of our lives having never risked them on anything.
Researcher, MIT Mind Machine Project...
The concept of cause and effect is better understood as the flow of information between two connected events, from the earlier event to the later one. Saying "A causes B" sounds precise, but is actually very vague. I would specify much more by saying "with the information that A has happened, I can compute with almost total confidence* that B will happen." The latter rules out the possibility that other factors could prevent B even if A does happen, but allows the possibility that other factors could cause B even if A doesn't happen.
As shorthand, we can say that one set of information "specifies" another if the latter can be deduced or computed from the former. Note that this doesn't only apply to one-bit sets of information, like the occurrence of a specific event. It can also apply to symbolic variables (given the state of the Web, the results you get from a search engine are specified by your query), numeric variables (the number read off a precise thermometer is specified by the temperature of the sensor), or even behavioral variables (the behavior of a computer is specified by the bits loaded in its memory).
But let's take a closer look at the assumptions we're making. Astute readers may have noticed that in one of my examples, I assumed that the entire state of the Web was a constant. How ridiculous! In mathematical parlance, assumptions are known as "priors," and in a certain widespread school of statistical thought, they are considered the most important aspect of any process involving information. What we really want to know is if, given a set of existing priors, adding one piece of information (A) would allow us to update our estimate of the likelihood of another piece of information (B). Of course, this depends on the priors — for instance, if our priors include absolute knowledge of B, then an update will not be possible.
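This updating step is Bayes' rule. A minimal sketch in Python (the probabilities are invented) shows both that informative evidence shifts the estimate and that absolute prior knowledge of B leaves no room for an update:

```python
def update(prior_b, p_a_given_b, p_a_given_not_b):
    """Bayes' rule: revise the probability of B after learning that A happened."""
    p_a = p_a_given_b * prior_b + p_a_given_not_b * (1 - prior_b)
    return p_a_given_b * prior_b / p_a

# A much likelier under B than otherwise: the estimate of B rises.
print(f"{update(0.30, 0.90, 0.20):.2f}")  # 0.66
# A equally likely either way: A is uninformative, estimate unchanged.
print(f"{update(0.30, 0.50, 0.50):.2f}")  # 0.30
# Prior already certain of B: no update is possible.
print(f"{update(1.00, 0.90, 0.20):.2f}")  # 1.00
```

Whether learning A moves the estimate of B thus depends entirely on the priors, exactly as described above.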
If, for most reasonable sets of priors, information about A would allow us to update our estimate of B, then it would seem there is some sort of causal connection between the two. But the form of the causal connection is unspecified — a principle often called "correlation does not imply causation." The reason for this is that the essence of causation as a concept rests on our tendency to have information about earlier events before we have information about later events. (The full implications of this concept on human consciousness, the second law of thermodynamics, and the nature of time are interesting, but sadly outside the scope of this essay.)
If information about all events always came in the order they occurred, then correlation would indeed imply causation. But, in the real world, not only are we limited to observing events in the past, but we may also discover information about those events out of order. Thus, the correlations we observe could be reverse causes (information about A allows us to update our estimate of B, although B happened first and thus was the cause of A) or even more complex situations (e.g. information about A allows us to update our estimate of B, but is also giving us information about C, which happened before either A or B and caused both).
Information flow is symmetric: if information about A were to allow us to update our estimate of B, then information about B would allow us to update our estimate of A. But since we cannot change the past or know the future, these constraints are only useful to us when contextualized temporally and arranged in order of occurrence. Information flow is always from the past to the future, but in our minds, some of the arrows may be reversed. Resolving this ambiguity is essentially the problem that science was designed to solve. If you can master the technique of visualizing all information flow and keeping track of your priors, then the full power of the scientific method — and more — is yours to wield from your personal cognitive toolkit.
* In our universe, too many things are interconnected for absolute statements of any kind, so we usually relax our criteria; for instance, "total confidence" might be relaxed from a 0% chance of being wrong to, say, a 1 in 3 quadrillion chance of being wrong — about the chance that, as you finish this sentence, all of humanity will be wiped out by a meteor.
Ambient Memory And The Myth Of Neutral Observation
Tech Culture Journalist; Partner, Contributor, Co-editor, Boing Boing; Executive......
Like others whose early life experiences were punctuated with trauma, my memory has holes. Some of those holes are as wide as years. Others, just big enough to swallow painful incidents that lasted moments, but reverberated for decades.
The brain-record of those experiences sometimes submerges, then resurfaces, sometimes submerging again over time. As I grow older, stronger, and more capable of contending with memory, I become more aware of how different my own internal record may be from others who lived the identical moment.
Each of us commits our experiences to memory and permanence differently. Time and human experience are not linear, nor is there one and only one neutral record of each lived moment. Human beings are impossibly complex tarballs of muscle, blood, bone, breath, and electrical pulses that travel through nerves and neurons; we are bundles of electrical pulses carrying payloads, pings hitting servers. And our identities are inextricably connected to our environments: no story can be told without a setting.
My generation is the last generation of human beings who were born into a pre-internet world, but who matured in tandem with that great, networked hive-mind. In the course of my work online, committing new memories to the network mind each day, I have come to understand that our shared memory of events, of truths, of biography, and of fact — all of this shifts and ebbs and flows, just as our most personal memories do.
Ever-edited Wikipedia replaces paper encyclopedias. The chatter of Twitter eclipses fixed-form and hierarchical communication. The news flow we remember from our childhoods, a single voice of authority on one of three channels, is replaced by something hyper-evolving, chaotic, and less easily defined. Even the formal histories of State may be rewritten by the likes of Wikileaks, and its yet-unlaunched children.
Facts are more fluid than in the days of our grandfathers. In our networked mind, the very act of observation — reporting or tweeting or amplifying some piece of experience — changes the story. The trajectory of information, the velocity of this knowledge on the network, changes the very nature of what is remembered, who remembers it, and for how long it remains part of our shared archive. There are no fixed states.
So must our notion of memory and record evolve.
The history we are creating now is alive. Let us find new ways of recording memory, new ways of telling the story, that reflect life. Let us embrace this infinite complexity as we commit new history to record.
Let us redefine what it means to remember.
Living is fatal
Quantum Mechanical Engineer, MIT; Author, Programming the Universe...
The ability to reason clearly in the face of uncertainty.
If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.
A well-developed scientific method for dealing with the unknown has existed for many years — the mathematical theory of probability. Probabilities are numbers whose values reflect how likely different events are to take place. People are bad at assessing probabilities. They are bad at it not just because they are bad at addition and multiplication. Rather, people are bad at probability at a deep, intuitive level: they overestimate the probability of rare but shocking events — a burglar breaking into your bedroom while you're asleep, say. Conversely, they underestimate the probability of common, but quiet and insidious events — the slow accretion of globules of fat on the walls of an artery, or another ton of carbon dioxide pumped into the atmosphere.
I can't say that I'm very optimistic about the odds that people will learn to understand the science of odds. When it comes to understanding probability, people basically suck. Consider the following example, based on a true story reported by Joel Cohen of Rockefeller University. A group of graduate students notes that women have a significantly lower chance of admission than men to the graduate programs at a major university. The data are unambiguous: women applicants are only two thirds as likely as male applicants to be admitted. The graduate students file suit against the university, alleging discrimination on the basis of gender. When admissions data are examined on a department-by-department basis, however, a strange fact emerges: within each department, women are MORE likely to be admitted than men. How can this possibly be?
The answer turns out to be simple, if counterintuitive. More women are applying to departments that have few positions. These departments admit only a small percentage of applicants, men or women. Men, by contrast, are applying to departments that have more positions and that admit a higher percentage of applicants. Within each department, women have a better chance of admission than men — it's just that few women apply to the departments that are easy to get into.
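This is an instance of Simpson's paradox, and it is easy to reproduce. The sketch below (Python; the departments and counts are invented, chosen only to mimic the pattern) shows women ahead within every department yet behind overall:

```python
# Hypothetical admissions data:
# (department, women applied, women admitted, men applied, men admitted)
departments = [
    ("selective dept", 900, 90, 100, 8),    # admits few applicants; most women apply here
    ("open dept",      100, 70, 900, 600),  # admits many applicants; most men apply here
]

# Within each department, women are admitted at the HIGHER rate.
for name, app_w, adm_w, app_m, adm_m in departments:
    print(f"{name}: women {adm_w / app_w:.0%} admitted, men {adm_m / app_m:.0%} admitted")

# Yet pooled across departments, women are admitted at the LOWER rate.
total_w = sum(d[2] for d in departments) / sum(d[1] for d in departments)
total_m = sum(d[4] for d in departments) / sum(d[3] for d in departments)
print(f"overall: women {total_w:.0%}, men {total_m:.0%}")
```

Because most women applied to the selective department, the pooled rates reverse the within-department comparison: the aggregation, not the admissions committees, produces the apparent bias.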
This counterintuitive result indicates that the admissions committees in the different departments are not discriminating against women. That doesn't mean that bias is absent. The number of graduate fellowships available in a particular field is determined largely by the federal government, which chooses how to allocate research funds to different fields. It is not the university that is guilty of sexual discrimination, but society as a whole, which chose to devote more resources — and so more graduate fellowships — to the fields preferred by men.
Of course, some people are good at probability. A car insurance company that can't accurately determine the probabilities of accidents will go broke. In effect, when we pay premiums to insure ourselves against a rare event, we are buying into the insurance company's estimate of just how likely that event is. Driving a car, however, is one of those common but dangerous processes where human beings habitually underestimate the odds of something bad happening. Accordingly, some are disinclined to obtain car insurance (perhaps not surprising when the considerable majority of people rate themselves as better-than-average drivers). When a state government requires its citizens to buy car insurance, it does so because it figures, rightly, that people are underestimating the odds of an accident.
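The insurer's pricing logic is simply an expected value plus a loading. A toy sketch in Python (every number here is invented for illustration):

```python
# An insurer prices a policy off its estimate of the odds (all figures hypothetical):
p_accident = 0.05          # insurer's estimated annual probability of a claim
average_payout = 8_000.0   # average cost of a claim
loading = 1.25             # markup for operating costs and profit

fair_premium = p_accident * average_payout   # the actuarially fair price
premium_charged = fair_premium * loading

print(f"fair premium: {fair_premium:.2f}, premium charged: {premium_charged:.2f}")
```

A driver who accepts this price is implicitly accepting the insurer's 5% estimate; one who privately believes their own risk is far lower will feel overcharged, which is the very underestimation the insurance mandate corrects.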
Let's consider the debate over whether health insurance should be required by law. Living, like driving, is a common but dangerous process where people habitually underestimate risk, despite the fact that, with probability equal to one, living is fatal.
Associate Professor of Psychology and Neuroscience; Stanford University...
Since different visiting teachers had promoted contradictory philosophies, the villagers asked the Buddha whom they should believe. The Buddha advised: “When you know for yourselves ... these things, when performed and undertaken, conduce to well-being and happiness — then live and act accordingly.” Such empirical advice might sound surprising coming from a religious leader, but not from a scientist.
“See for yourself” is an unspoken credo of science. It is not enough to run an experiment and report the findings. Others who repeat that experiment must find the same thing. Repeatable experiments are called “replicable.” Although scientists implicitly respect replicability, they do not typically explicitly reward it.
To some extent, ignoring replicability comes naturally. Human nervous systems are designed to respond to rapid changes, ranging from subtle visual flickers to pounding rushes of ecstasy. Fixating on fast change makes adaptive sense — why spend limited energy on opportunities or threats that have already passed? But in the face of slowly growing problems, “change fixation” can prove disastrous (think of lobsters in the cooking pot or people under greenhouse gases).
Cultures can also promote change fixation. In science, some high profile journals and even entire fields emphasize novelty, consigning replications to the dustbin of the unremarkable and unpublishable. More formally, scientists are often judged based on their work's novelty rather than replicability. The increasingly popular "h-index" quantifies impact by assigning a number (h) which indicates that an investigator has published h papers that have been cited h or more times (so, Joe Blow has an h-index of 5 if he has published 5 papers, each of which others have cited 5 or more times). While impact factors correlate with eminence in some fields (e.g., physics), problems can arise. For instance, Doctor Blow might boost his h-index by publishing controversial (thus, cited) but unreplicable findings.
Why not construct a replicability (or “r”) index to complement impact factors? As with h, r could indicate that a scientist has originally documented r separate effects that independently replicate r or more times (so, Susie Sharp has an r-index of 5 if she has published 5 independent effects, each of which others have replicated 5 or more times). Replication indices would necessarily be lower than citation indices, since effects have to first be published before they can be replicated, but might provide distinct information about research quality. As with citation indices, replication indices might even apply to journals and fields, providing a measure that can combat biases against publishing and publicizing replications.
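The two indices reduce to the same computation on a list of counts — citations per paper for h, replications per effect for the proposed r. A minimal sketch (the function name is mine; the "r-index" is, as the essay says, hypothetical):

```python
# The h-index described above, and the proposed r-index, are both:
# the largest k such that at least k entries in a list of counts are >= k.

def h_like_index(counts):
    """Largest k such that at least k of the counts are >= k."""
    counts = sorted(counts, reverse=True)
    k = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            k = rank      # the top `rank` entries are all >= rank
        else:
            break
    return k

# Joe Blow: 5 papers, each cited 5+ times -> h-index of 5 (as in the text)
print(h_like_index([5, 5, 6, 7, 5]))   # 5
# Susie Sharp: 5 effects, each replicated 5+ times -> r-index of 5
print(h_like_index([9, 5, 5, 8, 5]))   # 5
```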
A replicability index might prove even more useful to nonscientists. Most investigators who have spent significant time in the salt mines of the laboratory already intuit that most ideas don’t pan out, and those that do sometimes result from chance or charitable interpretations. Conversely, they also recognize that replicability means they’re really on to something. Not so for the general public, who instead encounter scientific advances one cataclysmic media-filtered study at a time. As a result, laypeople and journalists are repeatedly surprised to find the latest counterintuitive finding overturned by new results. Measures of replicability could help channel attention towards cumulative contributions. Along these lines, it is interesting to consider applying replicability criteria to public policy interventions designed to improve health, enhance education, or curb violence. Individuals might even benefit from using replicability criteria to optimize their personal habits (e.g., more effectively dieting, exercising, working, etc.).
Replication should be celebrated rather than denigrated. Often taken for granted, replicability may be the exception rather than the rule. As running water resolves rock from mud, so can replicability highlight the most reliable findings, investigators, journals, and even fields. More broadly, replicability may provide an indispensable tool for evaluating both personal and public policies. As suggested in the Kalama Sutta, replicability might even help us decide whom to believe.
Phase Transitions And Scale Transitions: Conceptualizing Unexpected Changes Due To Scale
Computational Legal Scholar; Fellow, Yale Law School Internet and Society Project...
Physicists created the term "phase transition" to describe a change of state in a physical system, such as liquid to gas. The concept has since been applied in a variety of academic circles to describe other types of systems, from social transformations (think hunter-gatherer to farmer) to statistics (think abrupt changes in algorithm performance as parameters change), but has not yet emerged as part of the common lexicon.
One interesting aspect of the concept of the phase transition is that it describes a shift to a state seemingly unrelated to the previous one, and hence provides a model for phenomena that challenge our intuition. With only knowledge of water as a liquid, who would have imagined a conversion to gas with the application of heat? The mathematical definition of a phase transition in the physical context is precise, but even without this precision I argue the idea can be usefully extrapolated to describe a much broader class of phenomena today, particularly those that change abruptly and unexpectedly with an increase in scale.
Imagine points in two dimensions — a spray of dots on a sheet of paper. Now imagine a point cloud in three dimensions, say, dots hovering in the interior of a cube. Even if we could imagine points in four dimensions, would we have guessed that all of the points now lie on the convex hull of the cloud? In dimensions greater than three they always do. There hasn't been a phase transition in the mathematical sense, but as the dimension is scaled up the system shifts in a way we don't intuitively expect.
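The trend is easy to see numerically. The sketch below does not compute a full convex hull; instead it uses a sufficient certificate — a point p is a hull vertex if some direction d satisfies d·p > d·q for every other point q — and, for a centered Gaussian cloud, simply tries d = p itself. That heuristic certifies only a handful of points in 2-D but nearly every point in 20-D (the cloud sizes and dimensions are arbitrary choices for illustration):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def vertex_certificate(points, i):
    """Sufficient (not necessary) test that points[i] is a hull vertex:
    the direction d = points[i] supports a separating hyperplane."""
    d = points[i]
    return all(dot(d, q) < dot(d, d)
               for j, q in enumerate(points) if j != i)

def certified_vertices(dim, n=50, seed=0):
    """Count points of a Gaussian cloud certified as hull vertices."""
    rng = random.Random(seed)
    pts = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    return sum(vertex_certificate(pts, i) for i in range(n))

low, high = certified_vertices(2), certified_vertices(20)
print(low, high)  # few certified vertices in 2-D, nearly all 50 in 20-D
```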
I call these types of changes "scale transitions": unexpected outcomes resulting from increases in scale. For example, increases in the number of people interacting in a system can produce unforeseen outcomes: the operation of markets at large scales is often counterintuitive — think of the restrictive effect rent control laws can have on the supply of affordable rental housing, or how minimum wage laws can reduce the availability of low wage jobs (James Flynn gives "markets" as an example of a "shorthand abstraction"; here I am interested in the often counterintuitive operation of a market system at large scale); the serendipitous effects of enhanced communication, for example collaboration and interpersonal connection generating unexpected new ideas and innovation; or the counterintuitive effect of massive computation in science reducing experimental reproducibility, as data and code have proved harder to share than their descriptions. The concept of the scale transition is purposefully loose, designed as a framework for understanding when our natural intuition leads us astray in large scale situations.
This contrasts with Merton's concept of "unanticipated consequences" in that a scale transition refers to a system rather than to individual purposeful behavior, and is directly tied to the notion of changes due to increases in scale. Our intuition regularly seems to break down with scale, and we need a way of conceptualizing the resulting counterintuitive shifts in the world around us. Perhaps the most salient feature of the digital age is its facilitation of massive increases in scale — in data storage, processing power, connectivity — permitting us to address an unparalleled number of problems on an unparalleled scale. As technology becomes increasingly pervasive, I believe scale transitions will become commonplace.
Personal Data Mining
Editor of the UK edition of WIRED magazine...
From the dawn of civilisation until 2003, Eric Schmidt is fond of saying, humankind generated five exabytes of data. Now we produce five exabytes every two days — and the pace is accelerating. In our post-privacy world of pervasive social-media sharing, GPS tracking, cellphone-tower triangulation, wireless sensor monitoring, browser-cookie targeting, face-recognition detecting, consumer-intention profiling, and endless other means by which our personal presence is logged in databases far beyond our reach, citizens are largely failing to benefit from the power of all this data to help them make smarter decisions. It's time to reclaim the concept of data mining from the marketing industry's microtargeting of consumers, the credit-card companies' anti-fraud profiling, the intrusive surveillance of state-sponsored Total Information Awareness. We need to think more about mining our own output to extract patterns that turn our raw personal datastream into predictive, actionable information. All of us would benefit if the idea of personal data mining were to enter popular discourse.
Microsoft saw the potential back in September 2006, when it filed United States Patent application number 20,080,082,393 for a system of "personal data mining". Having been fed personal data provided by users themselves or gathered by third parties, the technology would then analyse it to "enable identification of opportunities and/or provisioning of recommendations to increase user productivity and/or improve quality of life". You can decide for yourself whether you trust Redmond with your lifelog, but it's hard to fault the premise: the personal data mine, the patent states, would be a way "to identify relevant information that otherwise would likely remain undiscovered".
Both I as a citizen and society as a whole would gain if individuals' personal datastreams could be mined to extract patterns upon which we could act. Such mining would turn my raw data into predictive information that can anticipate my mood and improve my efficiency, make me healthier and more emotionally intuitive, reveal my scholastic weaknesses and my creative strengths. I want to find the hidden meanings, the unexpected correlations that reveal trends and risk factors of which I had been unaware. In an era of oversharing, we need to think more about data-driven self-discovery.
A small but fast-growing self-tracking movement is already showing the potential of such thinking, inspired by Kevin Kelly's quantified self and Gary Wolf's data-driven life. With its mobile sensors and apps and visualisations, this movement is tracking and measuring exercise, sleep, alertness, productivity, pharmaceutical responses, DNA, heartbeat, diet, financial expenditure — and then sharing and displaying its findings for greater collective understanding. It is using its tools for clustering, classifying and discovering rules in raw data, but mostly is simply quantifying that data to extract signals — information — from the noise.
The cumulative rewards of such thinking will be altruistic rather than narcissistic, whether in pooling personal data for greater scientific understanding (23andMe) or in propagating user-submitted data to motivate behaviour change in others (Traineo). Indeed, as the work of Daniel Kahneman, Daniel Gilbert, and Christakis and Fowler demonstrates so powerfully, accurate individual-level data-tracking is key to understanding how human happiness can be quantified, how our social networks affect our behaviour, how diseases spread through groups.
The data is already out there. We just need to encourage people to tap it, share it, and corral it into knowledge.
The Culture Cycle
Davis-Brack Professor in the Behavioral Sciences at Stanford University...
Pundits now invoke culture to explain all manner of tragedies and triumphs, from why a disturbed young man opens fire on a politician, to why African-American children struggle in school, to why the United States can't establish democracy in Iraq, to why Asian factories build better cars. A quick click through a single morning's media, for example, yields the following catch: gun culture, Twitter culture, ethical culture, Arizona culture, always-on culture, winner-take-all culture, culture of violence, culture of fear, culture of sustainability, culture of corporate greed.
Yet no one explains what, exactly, culture is, how it works, or how to change it for the better.
A cognitive tool that fills this gap is the culture cycle, a tool that not only describes how culture works, but also clearly prescribes how to make lasting change. The culture cycle is the iterative, recursive process by which 1) people create the cultures to which they later adapt, and 2) cultures shape people so that they act in ways that perpetuate their cultures. In other words, cultures and people (and some other primates) make each other up. This process involves four nested planes: individual selves (their thoughts, feelings, and actions); the everyday practices and artifacts that reflect and shape those selves; the institutions (such as education, law, and media) that afford or discourage certain everyday practices and artifacts; and pervasive ideas about what is good, right, and human that both influence and are influenced by all these levels. (See figure below.) The culture cycle rolls for all types of social distinctions, from the macro (nation, race, ethnicity, region, religion, gender, social class, generation, etc.) to the micro (occupation, organization, neighborhood, hobby, genre preference, family, etc.).
One consequence of the culture cycle is that no action is caused by either individual psychological features or external influences alone. Both are always at work. Just as there is no such thing as a culture without agents, there are no agents without culture. Humans are culturally-shaped shapers. And so, for example, in the case of a school shooting it is overly simplistic to ask whether the perpetrator shot because of a mental illness, or because of his interactions with a hostile and bullying school climate, with a particularly deadly cultural artifact (i.e., a gun), with institutions that encourage that climate and allow access to that artifact, or with pervasive ideas and images that glorify resistance and violence. The better question, and the one that the culture cycle requires, is how do these four levels of forces interact? Indeed, researchers at the vanguard of public health contend that neither social stressors nor individual vulnerabilities are enough to produce most mental illnesses. Instead, the interplay of biology and culture, of genes and environments, of nature and nurture is responsible for most psychiatric disorders.
Social scientists succumb to another form of this oppositional thinking. For example, in the face of Hurricane Katrina, thousands of poor African-American residents "chose" not to evacuate the Gulf Coast, to quote most news accounts. More charitable social scientists had their explanations ready, and struggled to get their variables into the limelight. Of course they didn't leave, said the psychologists, because poor people have an external locus of control, low intrinsic motivation, or low self-efficacy. Of course they didn't leave, said the sociologists and political scientists, because their cumulative lack of access to adequate income, banking, education, transportation, healthcare, police protection, and basic civil rights made staying put their only option. Of course they didn't leave, said the anthropologists, because their kin networks, religious faith, and historical ties held them there. Of course they didn't leave, said the economists, because they didn't have the material resources, knowledge, or financial incentives to get out.
The irony in the interdisciplinary bickering is that everyone is mostly right. But they are right in the same way that the blind men touching the elephant in the Indian proverb are right: the failure to integrate each field's contributions makes everyone wrong and, worse, not very useful.
The culture cycle captures how these different levels of analysis relate to each other. Granted, our four-level process explanation is not as zippy as the single-variable accounts that currently dominate most public discourse. But it's far simpler and more accurate than the standard "it's complicated" and "it depends" answers that more thoughtful experts often supply.
Moreover, built into the culture cycle are the instructions for how to reverse engineer it: a sustainable change at one level usually requires change at all four levels. There are no silver bullets. The ongoing U.S. Civil Rights Movement, for example, requires the opening of individual hearts and minds; and the mixing of people as equals in daily life, along with media representations thereof; and the reform of laws and policies; and fundamental revision of our nation's idea of what a good human being is.
Just because people can change their cultures, however, does not mean that they can do so easily. A major obstacle is that most people don't even realize that they have cultures. Instead, they think that they are standard-issue humans—they are normal; it's all those other people who are deviating from the natural, obvious and right way to be.
Yet we are all part of multiple culture cycles. And we should be proud of that fact, for the culture cycle is our smart human trick. Because of it, we don't have to wait for mutation or natural selection to allow us to range farther over the face of the earth, to extract nutrition from a new food source, or to cope with a change in climate. And as modern life becomes more complex, and social and environmental problems become more widespread and entrenched, people will need to understand and use the culture cycle more skillfully.
Post-doctoral fellow, Mind/Brain/Behavior Interfaculty Initiative, Harvard University...
We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we're doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties. Every year psychologists use dramatic examples to entertain their undergraduate audiences. Confabulation is funny, but there is a serious side, too. Understanding it can help us act better and think better in everyday life.
Some of the most famous examples of confabulation come from "split-brain" patients, whose left and right brain hemispheres have been surgically disconnected for medical treatment. Neuroscientists have devised clever experiments in which information is provided to the right hemisphere (for instance, pictures of naked people), causing a change in behavior (embarrassed giggling). Split-brain individuals are then asked to explain their behavior verbally, which relies on the left hemisphere. Realizing that their body is laughing, but unaware of the nude images, the left hemisphere will confabulate an excuse for the body's behavior ("I keep laughing because you ask such funny questions, Doc!").
Wholesale confabulations in neurological patients can be jaw-dropping, but in part that is because they do not reflect ordinary experience. Most of the behaviors that you or I perform are not induced by crafty neuroscientists planting subliminal suggestions in our right hemisphere. When we are outside the laboratory — and when our brains have all the usual connections — most behaviors that we perform are the product of some combination of deliberate thinking and automatic action.
Ironically, that is exactly what makes confabulation so dangerous. If we routinely got the explanation for our behavior totally wrong — as completely wrong as split-brain patients sometimes do — we would probably be much more aware that there are pervasive, unseen influences on our behavior. The problem is that we get all of our explanations partly right, correctly identifying the conscious and deliberate causes of our behavior. Unfortunately, we mistake "partly right" for "completely right", and thereby fail to recognize the equal influence of the unconscious, or to guard against it.
A choice of job, for instance, depends partly on careful deliberation about career interests, location, income, and hours. At the same time, research reveals that choice to be influenced by a host of factors of which we are unaware. People named Dennis or Denise are more likely to be dentists, while people named Virginia are more likely to locate to (you guessed it) Virginia. Less endearingly, research suggests that on average people will take a job with fewer benefits, a longer commute and a smaller income if it allows them to avoid having a female boss. Surely most people do not want to choose a job based on the sound of their name, nor do they want to sacrifice job quality in order to perpetuate old gender norms. Indeed, most people have no awareness that these factors influence their own choices. When you ask them why they took the job, they are likely to reference their conscious thought processes: "I've always loved making ravioli, the Lira is on the rebound and Rome is for lovers…" That answer is partly right, but it is also partly wrong, because it misses the deep reach of automatic processes on human behavior.
People make harsher moral judgments in foul-smelling rooms, reflecting the role of disgust as a moral emotion. Women are less likely to call their fathers (but equally likely to call their mothers) during the fertile phase of their menstrual cycle, reflecting a means of incest avoidance. Students indicate greater political conservatism when polled near a hand-sanitizing station during a flu epidemic, reflecting the influence of a threatening environment on ideology. They also indicate a closer bond to their mother when holding hot coffee versus iced coffee, reflecting the metaphor of a "warm" relationship.
Automatic behaviors can be remarkably organized, and even goal-driven. For example, research shows that people tend to cheat just as much as they can without realizing that they are cheating. This is a remarkable phenomenon: Part of you is deciding how much to cheat, calibrated at just the level that keeps another part of you from realizing it.
One of the ways that people pull off this trick is with innocent confabulations: When self-grading an exam, students think, "Oh, I was going to circle e, I really knew that answer!" This isn't a lie, any more than it's a lie to say you have always loved your mother (latte in hand), but don't have time to call your dad during this busy time of the month. These are just incomplete explanations, confabulations that reflect our conscious thoughts while ignoring the unconscious ones.
This brings me to the central point, the part that makes confabulation an important concept in ordinary life and not just a trick pony for college lectures. Perhaps you have noticed that people have an easier time sniffing out unseemly motivations for others' behavior than recognizing the same motivations in their own. Others avoided female bosses (sexist) and inflated their grades (cheaters), while we chose Rome and really meant to say that Anne was the third Brontë. There is a double tragedy in this double standard.
First, we jump to the conclusion that others' behaviors reflect their bad motives and poor judgment, attributing conscious choice to behaviors that may have been influenced unconsciously. Second, we assume that our own choices were guided solely by the conscious explanations that we conjure, and reject or ignore the possibility of our own unconscious biases.
By understanding confabulation we can begin to remedy both faults. We can hold others responsible for their behavior without necessarily impugning their conscious motivations. And, we can hold ourselves more responsible by inspecting our own behavior for its unconscious influences, as unseen as they are unwanted.
Assistant Professor, Neuroscience, Baylor College of Medicine; Author, Sum...
In 1909, the biologist Jakob von Uexküll introduced the concept of the umwelt. He wanted a word to express a simple (but often overlooked) observation: different animals in the same ecosystem pick up on different environmental signals. In the blind and deaf world of the tick, the important signals are temperature and the odor of butyric acid. For the black ghost knifefish, it's electrical fields. For the echolocating bat, it's air-compression waves. The small subset of the world that an animal is able to detect is its umwelt. The bigger reality, whatever that might mean, is called the umgebung.
The interesting part is that each organism presumably assumes its umwelt to be the entire objective reality "out there." Why would any of us stop to think that there is more beyond what we can sense? In the movie The Truman Show, the eponymous Truman lives in a world completely constructed around him by an intrepid television producer. At one point an interviewer asks the producer, "Why do you think Truman has never come close to discovering the true nature of his world?" The producer replies, "We accept the reality of the world with which we're presented." We accept our umwelt and stop there.
To appreciate the amount that goes undetected in our lives, imagine you're a bloodhound dog. Your long nose houses two hundred million scent receptors. On the outside, your wet nostrils attract and trap scent molecules. The slits at the corners of each nostril flare out to allow more air flow as you sniff. Even your floppy ears drag along the ground and kick up scent molecules. Your world is all about olfaction. One afternoon, as you're following your master, you stop in your tracks with a revelation. What is it like to have the pitiful, impoverished nose of a human being? What can humans possibly detect when they take in a feeble little noseful of air? Do they suffer a hole where smell is supposed to be?
Obviously, we suffer no absence of smell because we accept reality as it's presented to us. Without the olfactory capabilities of a bloodhound, it rarely strikes us that things could be different. Similarly, until a child learns in school that honeybees enjoy ultraviolet signals and rattlesnakes employ infrared, it does not strike her that plenty of information is riding on channels to which we have no natural access. From my informal surveys, it is very uncommon knowledge that the part of the electromagnetic spectrum that is visible to us is less than a ten-trillionth of it.
Our unawareness of the limits of our umwelt can be seen in colorblind people: until they learn that others can see hues they cannot, the thought of extra colors does not hit their radar screen. And the same goes for the congenitally blind: being sightless is not like experiencing "blackness" or "a dark hole" where vision should be. Just as a human does not miss the bloodhound's sense of smell, a blind person does not miss vision. They do not conceive of it. Electromagnetic radiation is simply not part of their umwelt.
The more science taps into these hidden channels, the more it becomes clear that our brains are tuned to detect a shockingly small fraction of the surrounding reality. Our sensorium is enough to get by in our ecosystem, but it does not approximate the larger picture.
I think it would be useful if the concept of the umwelt were embedded in the public lexicon. It neatly captures the idea of limited knowledge, of unobtainable information, and of unimagined possibilities. Consider the criticisms of policy, the assertions of dogma, the declarations of fact that you hear every day — and just imagine if all of these could be infused with the proper intellectual humility that comes from appreciating the amount unseen.
Diversity is Universal
Assistant Professor of Psychology at Northwestern University...
At every level in the vast and dynamic world of living things lies diversity. From biomes to biomarkers, the complex array of solutions nature affords to the most basic problems of survival in a given environment is riveting. In the world of humans alone, diversity is apparent in the genome, in the brain and in our behavior.
The mark of multiple populations lies in the fabric of our DNA. The signature of selfhood in the brain holds dual frames, one for thinking about one's self as absolute, the other in the context of others. From this biological diversity in humans arises cultural diversity, directly observable in nearly every aspect of how people think, feel and behave. From classrooms to conventions across continents, the range and scope of human activities is stunning.
Recent centuries have seen the scientific debate regarding the nature of human nature cast as a dichotomy between diversity on the one hand and universalism on the other. Yet a seemingly paradoxical, but tractable, scientific concept that may enhance our cognitive toolkit over time is the simple notion that diversity is universal.
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational...
I think the scientific concept that would improve everybody's cognitive toolkit the most is "scientific concept".
Despite spectacular success in research, I feel that our global scientific community has been nothing short of a spectacular failure when it comes to educating the public. Haitians burned 12 "witches" in 2010. In the US, recent polls show that 39% consider astrology scientific, and 40% believe that our human species is less than 10,000 years old. If everyone understood the concept of "scientific concept", these percentages would be zero.
Moreover, the world would be a better place, since people with a scientific lifestyle, basing their decisions on correct information, maximize their chances of success. By making rational buying and voting decisions, they also strengthen the scientific approach to decision-making in companies, organizations and governments.
Why have we scientists failed so miserably? I think the answers lie mainly in psychology, sociology and economics.
A scientific lifestyle requires a scientific approach to both gathering information and using information, and both have their pitfalls.
You're clearly more likely to make the right choice if you're aware of the full spectrum of arguments before making your mind up, yet there are many reasons why people don't get such complete information. Many lack access to it (3% of Afghans have internet, and in a 2010 poll, 92% didn't know about the 9/11 attacks).
Many are too swamped with obligations and distractions to seek it. Many seek information only from sources that confirm their preconceptions. The most valuable information can be hard to find even for those who are online and uncensored, buried in an unscientific media avalanche.
Then there's what we do with the information we have. The core of a scientific lifestyle is to change your mind when faced with information that disagrees with your views, avoiding intellectual inertia, yet many laud leaders stubbornly sticking to their views as "strong". The great physicist Richard Feynman hailed "distrust of experts" as a cornerstone of science, yet herd mentality and blind faith in authority figures is widespread. Logic forms the basis of scientific reasoning, yet wishful thinking, irrational fears and other cognitive biases often dominate decisions.
So what can we do to promote a scientific lifestyle?
The obvious answer is improving education. In some countries, having even the most rudimentary education would be a major improvement (less than half of all Pakistanis can read). By undercutting fundamentalism and intolerance, it would curtail violence and war.
By empowering women, it would curb poverty and the population explosion. However, even countries that offer everybody education can make major improvements.
All too often, schools resemble museums, reflecting the past rather than shaping the future. The curriculum should shift from one watered down by consensus and lobbying to skills our century needs, for relationships, health, contraception, time management, critical thinking and recognizing propaganda. For youngsters, learning a global language and typing should trump long division and writing cursive. In the internet age, my own role as a classroom teacher has changed. I'm no longer needed as a conduit of information, which my students can simply download on their own. Rather, my key role is inspiring a scientific lifestyle, curiosity and desire to learn more.
Now let's get to the most interesting question: how can we really make a scientific lifestyle take root and flourish?
Reasonable people have been making similar arguments for better education since long before I was in diapers, yet rather than improving, education and adherence to a scientific lifestyle are arguably deteriorating further in many countries, including the US. Why? Clearly because there are powerful forces pushing back in the opposite direction, and they are pushing more effectively. Corporations concerned that a better understanding of certain scientific issues would harm their profits have an incentive to muddy the waters, as do fringe religious groups concerned that questioning their pseudoscientific claims would erode their power.
So what can we do? The first thing we scientists need to do is get off our high horses, admit that our persuasive strategies have failed, and develop a better strategy. We have the advantage of having the better arguments, but the anti-scientific coalition has the advantage of better funding.
However, and this is painfully ironic, it is also more scientifically organized! If a company wants to change public opinion to increase its profits, it deploys scientific and highly effective marketing tools. What do people believe today? What do we want them to believe tomorrow? Which of their fears, insecurities, hopes and other emotions can we take advantage of? What's the most cost-effective way of changing their mind? Plan a campaign. Launch. Done.
Is the message oversimplified or misleading? Does it unfairly discredit the competition? That's par for the course when marketing the latest smartphone or cigarette, so it would be naive to think that the code of conduct should be any different when this coalition fights science.
Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. By what scientific argument will it make a hoot of a difference if we grumble "we won't stoop that low" and "people need to change" in faculty lunch rooms and recite statistics to journalists?
We scientists have basically been saying "tanks are unethical, so let's fight tanks with swords".
To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically:
We need new science advocacy organizations which use all the same scientific marketing and fundraising tools as the anti-scientific coalition.
We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
We won't need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.
An Instinct to Learn
Department of Cognitive Biology, University of Vienna; Author, The Evolution of......
One of the most pernicious misconceptions in cognitive science is the belief in a dichotomy between nature and nurture. Many psychologists, linguists and social scientists, along with the popular press, continue to treat nature and nurture as combating ideologies, rather than complementary perspectives. For such people, the idea that something is both "innate" and "learned", or both "biological" and "cultural", is an absurdity. Yet most biologists today recognize that understanding behavior requires that we understand the interaction between inborn cognitive processes (e.g. learning and memory) and individual experience. This is particularly true of human behavior, since the capacities for language and culture are some of the key adaptations of our species, and involve irreducible elements of both biology and environment, of both nature and nurture.
The antidote to "nature versus nurture" thinking is to recognize the existence, and importance, of "instincts to learn". This phrase was introduced by Peter Marler, one of the fathers of birdsong research. A young songbird, while still in the nest, eagerly listens to adults of its own species sing. Months later, having fledged, it begins singing itself, and shapes its own initial sonic gropings to the template provided by those stored memories. During this period of "subsong" the bird gradually refines and perfects its own song, until by adulthood it is ready to defend a territory and attract mates with its own, perhaps unique, species-typical song.
Songbird vocal learning is the classic example of an instinct to learn. The songbird's drive to listen, and to sing, and to shape its song to that which it heard, is all instinctive. The bird needs no tutelage, nor feedback from its parents, to go through these stages. Nonetheless, the actual song that it sings is learned, passed culturally from generation to generation. Birds have local dialects, varying randomly from region to region. If the young bird hears no song, it will produce only an impoverished squawking, not a typical song.
Importantly, this capacity for vocal learning is only true of some birds, like songbirds and parrots. Other bird species, like seagulls, chickens or owls, do not learn their vocalizations: rather, their calls develop reliably in the absence of any acoustic input. The calls of such birds are truly instinctive, rather than learned. But for those birds capable of vocal learning, the song that an adult bird sings is the result of a complex interplay between instinct (to listen, to rehearse, and to perfect) and learning (matching the songs of adults of its species).
It is interesting, and perhaps surprising, to realize that most mammals do not have a capacity for complex vocal learning of this sort. Current research suggests that, aside from humans, only marine mammals (whales, dolphins, seals…), bats, and elephants have this ability. Among primates, humans appear to be the only species that can hear new sounds in the environment, and then reproduce them. Our ability to do this seems to depend on a babbling stage during infancy, a period of vocal playfulness that is as instinctual as the young bird's subsong. During this stage, we appear to fine tune our vocal control so that, as children, we can hear and reproduce the words and phrases of our adult caregivers.
So is human language an instinct, or learned? The question, presupposing a dichotomy, is intrinsically misleading. Every word that any human speaks, in any of our species' 6000 languages, has been learned. And yet the capacity to learn that language is a human instinct, something that every normal human child is born with, and that no chimpanzee or gorilla possesses.
The instinct to learn language is, indeed, innate (meaning simply that it reliably develops in our species), even though every language is learned. As Darwin put it in Descent of Man, "language is an art, like brewing or baking; but … certainly is not a true instinct, for every language has to be learnt. It differs, however, widely from all ordinary arts, for man has an instinctive tendency to speak, as we see in the babble of our young children; whilst no child has an instinctive tendency to brew, bake, or write."
And what of culture? For many, human culture seems the very antithesis of "instinct". And yet it must be true that language plays a key role in every human culture. Language is the primary medium for the passing on of historically-accumulated knowledge, tastes, biases and styles that makes each of our human tribes and nations its own unique and precious entity. And if human language is best conceived of as an instinct to learn, why not culture itself?
The past decade has seen a remarkable unveiling of our human genetic and neural makeup, and the coming decade promises even more remarkable breakthroughs. Each of us six billion humans is genetically unique (with the fascinating exception of identical twins). For each of us, our unique genetic makeup influences, but does not determine, what we are.
If we are to grapple earnestly and effectively with the reality of human biology and genetics, we will need to jettison outmoded dichotomies like the traditional distinction between nature and nurture. In their place, we will need to embrace the reality of the many instincts to learn (language, music, dance, culture…) that make us human.
I conclude that the dichotomy-denying phrase "instinct to learn" deserves a place in the cognitive toolkit of everyone who hopes, in the coming age of individual genomes, to understand human culture and human nature in the context of human biology. Human language, and human culture, are not instincts — but they are instincts to learn.
Cognitive Neuroscientist and Philosopher, Harvard University...
There's a lot of stuff in the world: trees, cars, galaxies, benzene, the Baths of Caracalla, your pancreas, Ottawa, ennui, Walter Mondale. How does it all fit together? In a word… Supervenience. (Pronounced soo-per-VEEN-yence. The verb form is to supervene.)
Supervenience is a shorthand abstraction, native to Anglo-American philosophy, that provides a general framework for thinking about how everything relates to everything else. The technical definition of supervenience is somewhat awkward:
Supervenience is a relationship between two sets of properties. Call them Set A and Set B. The Set A properties supervene on the Set B properties if and only if no two things can differ in their A properties without also differing in their B properties.
This definition, while admirably precise, makes it hard to see what supervenience is really about, which is the relationships among different levels of reality. Take, for example, a computer screen displaying a picture. At a high level, at the level of images, a screen may depict an image of a dog sitting in a rowboat, curled up next to a life vest. The screen's content can also be described as an arrangement of pixels, a set of locations and corresponding colors. The image supervenes on the pixels. This is because a screen's image-level properties (its dogginess, its rowboatness) cannot differ from another screen's image-level properties unless the two screens also differ in their pixel-level properties.
The pixels and the image are, in a very real sense, the same thing. But — and this is key — their relationship is asymmetrical. The image supervenes on the pixels, but the pixels do not supervene on the image. This is because screens can differ in their pixel-level properties without differing in their image-level properties. For example, the same image may be displayed at two different sizes or resolutions. And if you knock out a few pixels, it's still the same image. (Changing a few pixels will not protect you from charges of copyright infringement.) Perhaps the easiest way to think about the asymmetry of supervenience is in terms of what determines what. Determining the pixels completely determines the image, but determining the image does not completely determine the pixels.
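The asymmetry can be made concrete with a toy model. The following sketch is purely illustrative (the grids, the labeling function, and the name `image_label` are my inventions, not anything from the philosophical literature): pixels are a grid of brightness values, and the "image-level" description is whatever a labeling function computed from that grid reports.

```python
# Toy model of supervenience: image-level properties supervene on
# pixel-level properties. The grids and labeling rule are illustrative.

def image_label(pixels):
    """A crude 'image-level' description computed from the pixels:
    'dog' if any dark pixel is present, 'blank' otherwise."""
    return "dog" if any(v < 128 for row in pixels for v in row) else "blank"

# Two screens with different pixel-level properties...
screen_a = [[0, 255], [255, 255]]   # one dark pixel, top-left
screen_b = [[255, 0], [255, 255]]   # one dark pixel, top-right

# ...can nonetheless share the same image-level property:
assert image_label(screen_a) == image_label(screen_b) == "dog"

# But two screens that differ at the image level...
screen_c = [[255, 255], [255, 255]]  # all white
assert image_label(screen_c) == "blank"

# ...must differ at the pixel level, because the image is a function
# of the pixels: fixing the grid fixes the label, never the reverse.
assert screen_a != screen_c
```

The one-way dependence is built into the code itself: `image_label` maps pixels to images, and no function runs the other way, which is exactly the asymmetry of supervenience.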
The concept of supervenience deserves wider currency because it allows us to think clearly about many things, not just about images and pixels. Supervenience explains, for example, why physics is the most fundamental science and why the things that physicists study are the most fundamental things. To many people, this sounds like a value judgment, but it's not, or need not be. Physics is fundamental because everything in the universe, from your pancreas to Ottawa, supervenes on physical stuff. (Or so "physicalists" like me claim.) If there were a universe physically identical to ours, then it would also include a pancreas just like yours and an Ottawa just like Canada's.
Supervenience is especially helpful when grappling with three contentious and closely related issues: (1) the relationship between science and the humanities, (2) the relationship between the mind and brain, and (3) the relationship between facts and values.
Humanists sometimes perceive science as imperialistic, as aspiring to take over the humanities, to "reduce" everything to electrons, genes, numbers, and neurons, and thus to "explain away" all of the things that make life worth living. Such thoughts are accompanied by disdain or fear, depending on how credible such ambitions are taken to be. Scientists, for their part, sometimes are imperious, dismissing humanists and their pursuits as childish and unworthy of respect. Supervenience can help us think about how science and the humanities fit together, why science is sometimes perceived as encroaching on the humanist's territory, and the extent to which such perceptions are and are not valid.
It would seem that humanists and scientists study different things. Humanists are concerned with things like love, revenge, beauty, cruelty, and our evolving conceptions of such things. Scientists study things like electrons and nucleotides. But sometimes it sounds like scientists are getting greedy. Physicists aspire to construct a complete physical theory, which is sometimes called a "Theory of Everything" (TOE). If humanists and scientists study different things, and if physics covers everything, then what is left for the humanists? (Or, for that matter, non-physicists?)
There is a sense in which a TOE really is a TOE, and there is a sense in which it's not. A TOE is a complete theory of everything upon which everything else supervenes. If two worlds are physically identical, then they are also humanistically identical, containing exactly the same love, revenge, beauty, cruelty, and conceptions thereof. But that does not mean that a TOE puts all other theorizing out of business, not by a long shot. A TOE won't tell you anything interesting about Macbeth or the Boxer Rebellion.
Perhaps the threat from physics was never all that serious. Today, the real threat, if there is one, is from the behavioral sciences, especially the sciences that connect the kind of "hard" science we all studied in high school to humanistic concerns. In my opinion, three sciences stand out in this regard: behavioral genetics, evolutionary psychology, and cognitive neuroscience. I study moral judgment, a classically humanistic topic. I do this in part by scanning people's brains while they make moral judgments. More recently I've started looking at genes, and my work is guided by evolutionary thinking. My work assumes that the mind supervenes on the brain, and I attempt to explain human values — for example the tension between individual rights and the greater good — in terms of competing neural systems.
I can tell you from personal experience that this kind of work makes some humanists uncomfortable. During the discussion following a talk I gave at Harvard's Humanities Center, a prominent professor declared that my talk — not any particular conclusion I'd drawn, but the whole approach — made him physically ill. (Of course, this could just be me!)
The subject matter of the humanities has always supervened on the subject matter of the physical sciences, but in the past a humanist could comfortably ignore the subvening physical details, much as an admirer of a picture can ignore the pixel-level details. Is that still true? Perhaps it is. Perhaps it depends on one's interests. In any case, it's nothing to be worried sick about.
NB: Andrea Heberlein points out that "supervenience" may also refer to exceptional levels of convenience, as in, "New Chinese take-out right around the corner — Supervenient!"
Duality and World Piece
Associate professor of physics at Haverford College...
In the northeast Bronx, I walk through a neighborhood that I once feared going into, this time with a big smile on my face. This is because I can quell the bullies with a new slang word in our dictionary: "dual". As I approach the 2-train stop on East 225th Street, the bullies await me. I say, "Yo, what's the dual?" The bullies embrace me with a pound followed by a high five. I make my train.
In physics one of the most beautiful yet underappreciated ideas is that of duality. A duality allows us to describe a physical phenomenon from two different perspectives; often a flash of creative insight is needed to find both. However the power of the duality goes beyond the apparent redundancy of description. After all, why do I need more than one way to describe the same thing? There are examples in physics where either description of the phenomena fails to capture its entirety. Properties of the system 'beyond' the individual descriptions 'emerge'. I will provide two beautiful examples of how dualities manage to yield 'emergent' properties and, end with a speculation.
Most of us know about the famous wave-particle duality in quantum mechanics, which allows the photon (and the electron) to attain the magical properties that explain all of the wonders of atomic physics and chemical bonding. The duality states that matter (such as the electron) has both wave-like and particle-like properties, depending on the context. What's weird is how quantum mechanics manifests the wave-particle duality. According to the traditional Copenhagen interpretation, the wave is a travelling oscillation of the possibility that the electron will be realized somewhere as a particle.
Life gets strange in the example of quantum tunneling where the electron can penetrate a barrier only because of its 'wave-like' property. Classical physics tells us that an object will not surmount a barrier (like a hill) if its total kinetic energy is less than the potential energy of the barrier. However quantum mechanics predicts that particles can penetrate (or tunnel) through a barrier even when the kinetic energy is less than the potential energy of the barrier. This effect is used every time you use a flash drive or a CD player!
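How finite can this forbidden passage be? A rough WKB estimate gives the tunneling probability through a rectangular barrier as T ≈ e^(−2κL), with κ = √(2m(V−E))/ħ. The sketch below uses illustrative numbers of my own choosing (a 1 eV electron meeting a 2 eV barrier 1 nm wide), not figures from the essay:

```python
import math

# WKB estimate of tunneling through a rectangular barrier:
#   T ~ exp(-2 * kappa * L),  kappa = sqrt(2 m (V - E)) / hbar
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electron-volt

def tunneling_probability(E_eV, V_eV, L_m):
    """Approximate probability that an electron of energy E penetrates
    a barrier of height V (both in eV) and width L (in meters)."""
    if E_eV >= V_eV:
        return 1.0  # over the barrier: classically allowed
    kappa = math.sqrt(2 * M_E * (V_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * L_m)

# Classically this would be exactly 0; quantum mechanically it is
# small but finite (on the order of 1e-5 for these numbers).
T = tunneling_probability(1.0, 2.0, 1e-9)
```

Note how sharply T falls with barrier width: doubling L squares the exponential suppression, which is why flash-memory cells can trap charge for years behind a barrier only a few nanometers thick.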
Most people assume that the conduction of electrons in a metal is a well understood property of classical physics. But when we look deeper we realize that conduction happens because of the wave-like nature of the electrons. We call the collective electron waves that move through the periodic lattice of a metal Bloch waves. Qualitatively, when the electrons' Bloch waves constructively interfere we get conduction. Moreover, the wave-particle duality takes us further, to predict superconductivity: how electrons (and other spin-½ particles, like quarks) can conduct without resistance.
Nowadays in my field of quantum gravity and relativistic cosmology, theorists are exploiting another type of duality to address unresolved questions. This holographic duality was pioneered by Leonard Susskind and Gerard 't Hooft, and it later found a home in the AdS/CFT duality of Juan Maldacena.
This posits that the phenomenon of quantum gravity is described on one hand by an ordinary gravitational theory (a beefed-up version of Einstein's general relativity). On the other hand, a dual description of quantum gravity is given by a non-gravitational theory in a space-time of one lower dimension. We are left to wonder, in the spirit of the wave-particle duality, what new physics we might glean from this type of duality.
The holographic duality also seems to persist in other approaches to quantum gravity, such as Loop Quantum Gravity, and researchers are still exploring the true meaning behind holography and its potential predictions for experiments.
Dualities seem to allow us to understand and make use of properties in physics that go beyond a singular lens of analysis. Might we wonder whether duality can transcend its role in physics and extend into other fields? The dual of time will tell.
The Veeck Effect
Consultant in adaptive optics and an adjunct professor of anthropology at the......
There's an invidious rhetorical strategy that we've all seen — and I'm afraid that most of us have inflicted it on others as well. I call it the Veeck effect (of the first kind) — it occurs whenever someone adjusts the standards of evidence in order to favor a preferred outcome.
Why Veeck? Bill Veeck was a flamboyant baseball owner and promoter.
In his autobiography, Veeck — As in Wreck, he described installing a flexible fence in right field for the Milwaukee Brewers. At first he only put the fence up when facing a team full of power hitters, but eventually he took it to the limit, moving the fence up when the visitors were at bat and down when his team was.
The history of science is littered with flexible fences. The phlogiston theory predicted that phlogiston would be released when magnesium burned. It looked bad for that theory when experiments showed that burning magnesium became heavier — but its supporters happily explained that phlogiston had negative weight.
Consider Kepler. He came up with the idea that the distances of the six (known) planets could be explained by nesting the five Platonic solids. It almost worked for Earth, Mars, and Venus, but clearly failed for Jupiter. He dismissed the trouble with Jupiter, saying "nobody will wonder at it, considering the great distance". The theory certainly wouldn't have worked with any extra planets, but fortunately for Kepler's peace of mind, Uranus was discovered well after his death.
The Veeckian urge is strong in every field, but it truly flourishes in the human and historical sciences, where the definitive experiments that would quash such nonsense are often impossible, impractical, or illegal. Nowhere is this tendency stronger than among cultural anthropologists, who at times seem to have no reason for being other than refurbishing the reputations of cannibals.
Sometimes this has meant denying a particular case of cannibalism, for example among the Anasazi in the American Southwest. Evidence there has piled up and up: archaeologists have found piles of human bones with muscles scraped off, split open for marrow, polished by stirring in pots. They have even found human feces with traces of digested human tissue. But that's not good enough. For one thing, this implication of ancient cannibalism among the Anasazi is offensive to their Pueblo descendants, and that somehow trumps mounds of bloody evidence. You would think that the same principle would cause cultural anthropologists to embrace the face-saving falsehoods of other ethnic groups — didn't the South really secede over the tariff? But that doesn't seem to happen.
Some anthropologists have carried the effort further, denying that any culture was ever cannibalistic. They don't just deny Anasazi archaeology — they deny every kind of evidence, from archaeology to historical accounts, even reports from people alive today. When Álvaro de Mendaña discovered the Solomon Islands, he reported that a friendly chieftain threw a feast and offered him a quarter of a boy. Made up, surely. The conquistadors described the Aztecs as a cannibal kingdom — can't be right, even if the archeology supports it. When Papuans in Port Moresby volunteered to have a picnic in the morgue — to attract tourists, of course — they were just showing public spirit.
The Quaternary mass extinction, which wiped out much of the world's megafauna, offers paleontologists a chance to crank up their own fences. The large marsupials, flightless birds and reptiles of Australia disappeared shortly after humans arrived, about 50,000 years ago. The large mammals of North and South America disappeared about 10,000 years ago — again, just after humans showed up. Moas disappeared within two centuries of Polynesian colonization in New Zealand, while giant flightless birds and lemurs disappeared from Madagascar shortly after humans arrived. What does this pattern suggest as the cause? Why, climate change, of course. Couldn't be human hunters — that's unpossible!
The Veeck effect is even more common in everyday life than it is in science. It's just that we expect more from scientists. But scientific examples are clear-cut, easy to see, and understanding the strategy helps you avoid succumbing to it.
Whenever some Administration official says that absence of evidence is not evidence of absence — whenever a psychiatrist argues that Freudian psychotherapy works for some people, even if proven useless on average — Bill Veeck's spirit goes marching on.
*If you're wondering about the second Veeck effect, it's the intellectual equivalent of putting a midget up to bat. And that's another essay.
Science Writer; Consultant; Lecturer, Copenhagen; Author, The Generous Man...
Depth is what you do not see immediately at the surface of things. Depth is what is below that surface: a body of water below the surface of a lake, the rich life of a soil below the dirt or the spectacular line of reasoning behind a simple statement.
Depth is a straightforward aspect of the physical world. Gravity stacks stuff and not everything can be at the top. Below there is more and you can dig for it.
Depth acquired a particular meaning with the rise of complexity science a quarter of a century ago: What is characteristic of something complex? Very orderly things like crystals are not complex. They are simple. Very messy things like a pile of litter are very difficult to describe: They hold a lot of information. Information is a measure of how difficult something is to describe. Disorder has a high information content and order has a low one. All the interesting stuff in life is in-between: Living creatures, thoughts and conversations. Neither a lot of information nor a little. So information content does not lead us to what is interesting or complex. The marker is rather the information that is not there, but was somehow involved in creating the object of interest. The history of the object is more relevant than the object itself, if we want to pinpoint what is interesting to us.
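The contrast between orderly and messy description lengths can be demonstrated with a crude stand-in: the size of a thing's compressed description. This is an illustrative sketch only (compressed length is a rough proxy for information content, not a formal measure, and the sample data is my own invention):

```python
import os
import zlib

# A crude stand-in for "how hard something is to describe":
# the length of its compressed description.
def description_length(data: bytes) -> int:
    return len(zlib.compress(data, 9))

crystal = b"AB" * 5000        # perfect order, like a crystal
litter  = os.urandom(10000)   # disorder, like a pile of litter

# The crystal needs only a short description ("repeat AB 5000 times");
# the random mess needs nearly its full length to describe.
assert description_length(crystal) < description_length(litter)
```

Note that neither extreme captures depth: the crystal is trivially simple and the litter is trivially random. Depth, in the essay's sense, lies in how much processing produced the object, not in either measure of its surface.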
It is not the informational surface of the thing, but its informational depth that attracts our curiosity. It took a lot to bring it here, before our eyes. It is not what is there, but what used to be there, that matters. Depth is about that.
The concept of depth in complexity science was expressed in different ways: You could talk about the actual amount of physical information that was involved in bringing about something — the thermodynamic depth — or the amount of computation it took to arrive at a result— the logical depth. Both express the notion that the process behind is more important than the eventual product.
This idea can also be applied to human communication.
When you say "yes" at a wedding it (hopefully) represents a huge amount of conversation, coexistence and fun that you have had with that other person present. And a lot of reflection upon it. There is not a lot of information in the "yes" (one bit, actually), but the statement has depth. Most conversational statements have some kind of depth: There is more than meets the ear, something that happened between the ears of the person talking — before a statement was made. When you understand the statement, the meaning of what is being said, you "dig it", you get the depth, what is below and behind. What is not said, but meant — the exformation content, information processed and thrown away before the actual production of explicit information.
2 + 2 = 4. This is a simple computation. The result, 4, holds less information than the problem, 2 + 2 (essentially because the problem could also have been 3 + 1 and the result would still be 4). Computation is wonderful as a method for throwing away information, getting rid of it. You do computations to ignore all the details, to get an overview, an abstraction, a result.
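The information loss is easy to count: many distinct problems collapse onto one answer, so the answer alone cannot recover the problem. A tiny sketch (the range of numbers is an arbitrary choice of mine):

```python
# Computation as information loss: count how many distinct problems
# (ordered pairs of numbers from 0..4) all collapse to the answer 4.
problems = [(a, b) for a in range(5) for b in range(5) if a + b == 4]

# (0,4), (1,3), (2,2), (3,1), (4,0): five different problems,
# one indistinguishable result.
assert len(problems) == 5
assert all(a + b == 4 for a, b in problems)
```

Seeing "4" tells you nothing about which of the five histories produced it, which is exactly why a result, on its own, carries no depth.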
What you want is a way to distinguish between a very deep "yes" and a very shallow one: Did the guy actually think about what he said? Was the result 4 actually the result of a meaningful calculation? Is there in fact water below that surface? Does it have depth?
Most human interaction is about that question: Is this bluff or for real? Is there sincere depth in the affection? Does the result stem from intense analysis or is it just an estimate? Is there anything between the lines?
Signaling is all about this question: fake or depth? In biology the past few decades have seen the rise of studies of how animals prove to each other that there is depth behind the signal. The handicap principle of sexual selection is about a way to prove that your signal has depth: if a peacock has long, spectacular feathers, it proves that it can survive its predators even though the fancy plumage represents a disadvantage, a handicap. Hence the peahen can know that the individual displaying the huge tail is a strong one, or else it could not survive with that extreme tail.
Amongst humans you have what economists call costly signals: ways to show that you have something of value. The phenomenon of conspicuous consumption was observed by the sociologist Thorstein Veblen as early as 1899: if you want to prove that you have a lot of money, you have to waste it. That is, use it in a way that is absurd and idiotic, because only the rich guy can do so. But do it conspicuously, so that other people will know. Waste is a costly signal of the depth of a pile of money. Poor people have to use their money in a functional way.
Handicaps, costly signals, intense eye contact and rhetorical gestures are all about proving that what seems so simple really has a lot of depth.
That is also the point of abstractions: we want them to be shorthand for a lot of information that was digested in the process leading to their use, but is not present when we use them. Such abstractions have depth. We love them. Other abstractions have no depth. They are shallow and just used to impress the other guy. They do not help us. We hate them.
Intellectual life is very much about the ability to distinguish between the shallow and the deep abstractions. You need to know if there is any depth before you make that headlong dive and jump into it.
Professor of Geography and Earth & Space Sciences, UCLA...
As scientists, we're sympathetic to this question. We've asked it of ourselves before, many times, after fruitless days lost at the lab bench or computer seat. If only our brains could find a new way to process the delivered information faster, to interpret it better, to align the world's noisy torrents of data in a crystalline moment of clarity. In a word, for our brains to forgo their familiar thought sequences, and innovate.
To be sure, the word "innovate" has become something of a badly overused cliché. Tenacious CEOs, clever engineers, and restless artists come to mind before the methodical, data-obsessed scientist. But how often do we consider the cognitive role of innovation in the supposedly bone-dry world of hypothesis-testing, mathematical constraints and data-dependent empiricism?
In the world of science, innovation stretches the mind to find an explanation when the universe wants to hold on to its secrets just a little longer. This can-do attitude is made all the more valuable, not less, in a world constrained by ultimate barriers like continuity of mass and energy, absolute zero, or the Clausius-Clapeyron relation. Innovation is a critical enabler of discovery around, and of, these bounds. It is the occasional architect of that rare, wonderful breakthrough that arrives even when the tide of scientific opinion is against you.
A reexamination of this word from the scientific perspective reminds us of the extreme power of this cognitive tool, one that most people already possess. Through innovation, we all can transcend social, professional, political, scientific, and, most importantly, personal limits. Perhaps we might all put it to better and more frequent use.
Life As A Side Effect
Journalist; Author, The Tangled Bank: An Introduction to Evolution; Blogger, The Loom...
It's been over 150 years since Charles Darwin published On the Origin of Species, but we still have trouble appreciating the simple, brilliant insight at its core. That is, life's diversity does not exist because it is necessary for living things. Birds did not get wings so that they could fly. We do not have eyes so that we can read. Instead, eyes, wings, and the rest of life's wonders have come about as side effects of life itself. Living things struggle to survive, they reproduce, and they don't do a perfect job of replicating themselves. Evolution spins off of that loop, like heat coming off an engine. We're so used to seeing agents behind everything that we struggle to recognize life as a side effect. I think everyone would do well to overcome the urge to see agents where there are none. It would even help us to understand why we are so eager to see agents in the first place.
The Snuggle For Existence
Editor of New Scientist magazine...
Everyone is familiar with the struggle for existence. In the wake of the revolutionary work by Charles Darwin we realized that competition is at the very heart of evolution. The fittest win this endless "struggle for life most severe", as he put it, and all others perish. In consequence, every creature that crawls, swims, and flies today has ancestors that once successfully reproduced more often than their unfortunate competitors.
This is echoed in the way that people see life as competitive. Winners take all. Nice guys finish last. We look after number one. We are motivated by self-interest. Indeed, even our genes are said to be selfish.
Yet competition does not tell the whole story of biology.
I doubt many realise that, paradoxically, one way to win the struggle for existence is to pursue the snuggle for existence: to cooperate.
We already do this to a remarkable extent. Even the simplest activities of everyday life involve much more cooperation than you might think. Consider, for example, stopping at a coffee shop one morning to have a cappuccino and croissant for breakfast. To enjoy that simple pleasure could draw on the labors of a small army of people from at least half a dozen countries. Delivering that snack also relied on a vast number of ideas, which have been widely disseminated around the world down the generations by the medium of language.
Now we have remarkable new insights into what makes us all work together. Building on the work of many others, Martin Nowak of Harvard University has identified at least five basic mechanisms of cooperation. What I find stunning is that he shows the way that we human beings collaborate is as clearly described by mathematics as the descent of the apple that once fell in Newton's garden. The implications of this new understanding are profound.
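Nowak's framework is game-theoretic, and its flavor is easy to convey in miniature. The following is a hypothetical sketch of my own, using the textbook repeated Prisoner's Dilemma rather than Nowak's actual models: under direct reciprocity (one of the five mechanisms), conditional cooperators who meet their own kind outscore unconditional defectors.

```python
# Standard Prisoner's Dilemma payoffs:
# R = reward for mutual cooperation, S = sucker's payoff,
# T = temptation to defect, P = punishment for mutual defection.
R, S, T, P = 3, 0, 5, 1

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Never cooperate."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Return (score_a, score_b) after repeated play."""
    payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
              ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    hist_a, hist_b = [], []  # each strategy sees the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = payoff[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

A pair of reciprocators earns 300 points each over 100 rounds; a pair of defectors earns only 100 each. Cooperation wins the struggle, provided the game is repeated.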
Global human cooperation now teeters on a threshold. The accelerating wealth and industry of Earth's increasing inhabitants — itself a triumph of cooperation — is exhausting the ability of our home planet to support us all. Many problems that challenge us today can be traced back to a profound tension between what is good and desirable for society as a whole and what is good and desirable for an individual. That conflict can be found in global problems such as climate change, pollution, resource depletion, poverty, hunger, and overpopulation.
As once argued by the American ecologist Garrett Hardin, the biggest issues of all — saving the planet and maximizing the collective lifetime of the species Homo sapiens — cannot be solved by technology alone. If we are to win the struggle for existence, and avoid a precipitous fall, there's no choice but to harness this extraordinary creative force. It is down to all of us to refine and to extend our ability to cooperate.
Nowak's work contains a deeper message. Previously, there were only two basic principles of evolution — mutation and selection — where the former generates genetic diversity and the latter picks the individuals that are best suited to a given environment. We must now accept that cooperation is the third principle. From cooperation can emerge the constructive side of evolution, from genes to organisms to language and the extraordinarily complex social behaviors that underpin modern society.
We Are Unique
Brazilian physicist MARCELO GLEISER...
To improve everybody's cognitive toolkit, the required scientific concept has to be applicable to all humans. It needs to make a difference to us as a species, or, more to the point I am going to make, as a key factor in defining our collective role. This concept must impact the way we perceive who we are and why we are here. Hopefully, it will redefine the way we live our lives and plan for our collective future. This concept must make it clear that we matter.
A concept that might grow into this life-redefining powerhouse is the notion that we, humans on a rare planet, are unique and uniquely important. But what of Copernicanism — the notion that the more we learn about the universe the less important we become? I will argue that modern science, traditionally considered guilty of reducing our existence to a pointless accident in an indifferent universe, is actually saying the opposite. While it does say that we are an accident in an indifferent universe, it also says that we are a rare accident and thus not pointless.
But wait! Isn't it the opposite? Shouldn't we expect life to be common in the cosmos and us to be just one of many creatures out there? After all, as we discover more and more worlds circling other suns, the so-called exoplanets, we find an amazing array of possibilities. Also, given that the laws of physics and chemistry are the same across the universe, we should expect life to be ubiquitous: if it happened here, it must've happened in many other places. So why am I claiming that we are unique?
There is an enormous difference between life and intelligent life. By intelligent life I don't mean clever crows or dolphins, but minds capable of self-awareness and the ability to develop advanced technologies, that is, not just use what is at hand but transform materials into new devices that can perform a multitude of tasks. Keeping this definition in mind, I agree that single-celled life, although dependent on a multitude of physical and biochemical factors, shouldn't be an exclusive property of our planet. First, because life on Earth appeared almost as quickly as it could, no more than a few hundred million years after things quieted down enough; second, due to the existence of extremophiles, life forms capable of surviving in extreme conditions (very hot or cold, very acidic and/or radioactive, no oxygen, etc.), showing that life is very resilient and spreads into every niche that it can.
However, the existence of single-celled organisms doesn't necessarily lead to that of multicellular ones, much less to that of intelligent multicellular ones. Life is in the business of surviving the best way it can in a given environment. If the environment changes, those creatures that can survive under the new conditions will. Nothing in these dynamics supports the notion that once there is life all you have to do is wait long enough and puff, there pops a clever creature. (This smells of biological teleology, the concept that life's purpose is to create intelligent life, a notion that seduces many people for obvious reasons: it makes us the special outcome of some grand plan.) The history of life on Earth doesn't support this evolution toward intelligence: there have been many transitions toward greater complexity, none of them obvious: prokaryotic to eukaryotic unicellular creatures (and nothing more for 3 billion years!), unicellular to multicellular, sexual reproduction, mammals, intelligent mammals... Play the movie differently, and we wouldn't be here.
As we look at planet Earth and the factors that came into play for us to be here, we quickly realize that our planet is very special. Here is a short list: the long-term existence of a protective and oxygen-rich atmosphere; Earth's axial tilt, stabilized by a single large moon; the ozone layer and the magnetic field that jointly protect surface creatures from lethal cosmic radiation; plate tectonics that regulate the levels of carbon dioxide and keep the global temperature stable; the fact that our sun is a smallish, fairly stable star not too prone to releasing huge plasma burps. Consequently, it's rather naive to expect life — at the complexity level that exists here — to be ubiquitous across the universe.
A further point: even if there is intelligent life elsewhere and, of course, we can't rule that out (science is much better at finding things that exist than at ruling out things that don't), it will be so remote that for all practical purposes we are alone. Even if SETI finds evidence of other cosmic intelligences, we are not going to initiate a very intense collaboration. And if we are alone, and alone have the awareness of what it means to be alive and of the importance of remaining alive, we gain a new kind of cosmic centrality, very different from — and much more meaningful than — the religiously inspired one of pre-Copernican days, when Earth was the center of Creation: we matter because we are rare and we know it.
The joint realization that we live in a remarkable cosmic cocoon and that we are able to create languages and rocket ships in an otherwise apparently dumb universe ought to be transformative. Until we find other self-aware intelligences, we are how the universe thinks. We might as well start enjoying each other's company.
Objects of Understanding and Communication
Architect, Cartographer; Founder, TED Conference; Author, 33: Understanding Change...
THE WAKING DREAM I HAVE FOR MY TOOLKIT IS ONE FILLED WITH OBJECTS OF UNDERSTANDING AND COMMUNICATION.
THE TOOLS IN MY TOOLBOX RESPOND TO ME. THEY NOD WHEN I TALK, GIVE ME EVIDENCE OF ME, AND SUGGEST SECONDARY AND TERTIARY JOURNEYS THAT EXTEND MY CURIOSITIES.
THIS TOOLKIT IS WOVEN OF THREADS OF IGNORANCE AND STITCHES OF QUESTIONS THAT INVITE KNOWLEDGE IN.
IN THIS WEAVE ARE MAPS AND PATTERNS WITH ENOUGH STITCHES TO ALLOW ME TO MAKE THE CHOICE, AS I WISH, TO ADD A TINY DROP OF SUPERGLUE.
I WANT AN iPHONE / iPAD / iMAC THAT NODS.
THE FIRST MOVIES ARCHIVED STAGE SHOWS.
THE iPAD AND KINDLE ARCHIVE MAGAZINES, NEWSPAPERS AND BOOKS.
I WANT A NEW MODALITY WITH WHICH I CAN CONVERSE AT DIFFERING LEVELS OF COMPLEXITY, IN DIFFERENT LANGUAGES, AND WHICH UNDERSTANDS THE NUANCE OF MY QUESTIONS.
I WANT HELP FLYING THROUGH MY WAKING DREAMS CONNECTING THE THREADS OF THESE EPIPHANIES.
I BELIEVE WE ARE AT THIS CUSP.
A FIRST TOE IN THE WARM BATH OF THIS NEW MODALITY.
Associate Professor of Physics, University of California, Santa Cruz...
Paradoxes arise when one or more convincing truths contradict each other, clash with other convincing truths, or violate unshakeable intuitions. They are frustrating, yet beguiling. Many see virtue in avoiding, glossing over, or dismissing them. Instead, we should seek them out and, if we find one, sharpen it, push it to the extreme, and hope that the resolution will reveal itself, for with that resolution will invariably come a dose of Truth.
History is replete with examples, and with failed opportunities. One of my favorites is Olbers' paradox. Suppose the universe were filled with an eternal, roughly uniform distribution of shining stars. Faraway stars would look dim because they take up a tiny angle on the sky; but within that angle they are as bright as the Sun's surface. Yet in an eternal and infinite (or finite but unbounded) space, every direction would lie within the angle taken up by some star. The sky would be alight like the surface of the Sun. Thus, a simple glance at the dark night sky reveals that the universe must be dynamic: expanding, or evolving. Astronomers grappled with this paradox for several centuries, devising unworkable schemes for its resolution. Despite at least one correct view (by Edgar Allan Poe!), the implications never really permeated even the small community of people thinking about the fundamental structure of the universe. And so it was that Einstein, when he went to apply his new theory to the universe, sought an eternal and static model that could never make sense, introduced a term into his equations which he called his greatest blunder, and failed to invent the big-bang theory of cosmology.
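The arithmetic behind the paradox is worth making explicit. A shell of stars at radius r contains a number of stars proportional to r squared, while each star's apparent brightness falls off as one over r squared; every shell therefore contributes the same flux, and infinitely many shells add up to a blazing sky. A toy numerical sketch of my own (units and densities arbitrary):

```python
import math

# Toy model of a static, uniform, eternal universe:
# star density n, each star of luminosity L, shells of thickness dr.
n, L, dr = 1.0, 1.0, 1.0  # arbitrary units

def total_flux(num_shells):
    """Flux at the observer summed over concentric shells."""
    flux = 0.0
    for i in range(1, num_shells + 1):
        r = i * dr
        stars = n * 4 * math.pi * r**2 * dr       # stars in the shell
        flux_per_star = L / (4 * math.pi * r**2)  # inverse-square dimming
        flux += stars * flux_per_star             # = n * L * dr, a constant
    return flux

# Doubling the depth of the model universe doubles the sky's brightness:
# the sum diverges, so a dark night sky rules this universe out.
```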
Nature appears to contradict itself with the utmost rarity, and so a paradox can be an opportunity for us to lay bare our cherished assumptions, and discover which of them we must let go. But a good paradox can take us farther, revealing that not just the assumptions but the very modes of thinking we employed in creating the paradox must be replaced. Particles and waves? Not truth, just convenient models. The same number of integers as perfect squares of integers? Not crazy, though you might be if you invent cardinality. This sentence is false. And so, says Gödel, might be the foundations of any formal system that can refer to itself. The list goes on.
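Cantor's resolution of the integers-versus-squares paradox rests on nothing more exotic than a pairing, which a few lines can illustrate (my own minimal sketch):

```python
# Pair each positive integer n with the perfect square n*n.
def square(n):
    return n * n

# The first few pairs: 1->1, 2->4, 3->9, 4->16, 5->25, ...
pairs = [(n, square(n)) for n in range(1, 6)]

# The map is one-to-one (distinct integers give distinct squares) and
# hits every perfect square, so the two infinite sets match up exactly;
# the paradox dissolves once cardinality replaces naive counting.
distinct_images = len({square(n) for n in range(1, 1001)})
```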
What next? I've got a few big ones I'm wrestling with. How can thermodynamics' second law arise unless cosmological initial conditions are fine-tuned in a way we would never accept in any other theory or explanation of anything? How do we do science if the universe is infinite, and every outcome of every experiment occurs infinitely many times?
What impossibility is nagging at you?
We are Lost in Thought
Neuroscientist; Chairman, The Reason Project; Author, Letter to a Christian Nation...
I invite you to pay attention to anything — the sight of this text, the sensation of breathing, the feeling of your body resting against your chair — for a mere sixty seconds without getting distracted by discursive thought. It sounds simple enough: Just pay attention. The truth, however, is that you will find the task impossible. If the lives of your children depended on it, you could not focus on anything — even the feeling of a knife at your throat — for more than a few seconds, before your awareness would be submerged again by the flow of thought. This forced plunge into unreality is a problem. In fact, it is the problem from which every other problem in human life appears to be made.
I am by no means denying the importance of thinking. Linguistic thought is indispensable to us. It is the basis for planning, explicit learning, moral reasoning, and many other capacities that make us human. Thinking is the substance of every social relationship and cultural institution we have. It is also the foundation of science. But our habitual identification with the flow of thought — that is, our failure to recognize thoughts as thoughts, as transient appearances in consciousness — is a primary source of human suffering and confusion.
Our relationship to our own thinking is strange to the point of paradox, in fact. When we see a person walking down the street talking to himself, we generally assume that he is mentally ill. But we all talk to ourselves continuously — we just have the good sense to keep our mouths shut. Our lives in the present can scarcely be glimpsed through the veil of our discursivity: We tell ourselves what just happened, what almost happened, what should have happened, and what might yet happen. We ceaselessly reiterate our hopes and fears about the future. Rather than simply exist as ourselves, we seem to presume a relationship with ourselves. It's as though we are having a conversation with an imaginary friend possessed of infinite patience. Who are we talking to?
While most of us go through life feeling that we are the thinker of our thoughts and the experiencer of our experience, from the perspective of science we know that this is a distorted view. There is no discrete self or ego lurking like a minotaur in the labyrinth of the brain. There is no region of cortex or pathway of neural processing that occupies a privileged position with respect to our personhood. There is no unchanging "center of narrative gravity" (to use Daniel Dennett's phrase). In subjective terms, however, there seems to be one — to most of us, most of the time.
Our contemplative traditions (Hindu, Buddhist, Christian, Muslim, Jewish, etc.) also suggest, to varying degrees and with greater or lesser precision, that we live in the grip of a cognitive illusion. But the alternative to our captivity is almost always viewed through the lens of religious dogma. A Christian will recite the Lord's Prayer continuously over a weekend, experience a profound sense of clarity and peace, and judge this mental state to be fully corroborative of the doctrine of Christianity; a Hindu will spend an evening singing devotional songs to Krishna, feel suddenly free of his conventional sense of self, and conclude that his chosen deity has showered him with grace; a Sufi will spend hours whirling in circles, pierce the veil of thought for a time, and believe that he has established a direct connection to Allah.
The universality of these phenomena refutes the sectarian claims of any one religion. And, given that contemplatives generally present their experiences of self-transcendence as inseparable from their associated theology, mythology, and metaphysics, it is no surprise that scientists and nonbelievers tend to view their reports as the product of disordered minds, or as exaggerated accounts of far more common mental states — like scientific awe, aesthetic enjoyment, artistic inspiration, etc.
Our religions are clearly false, even if certain classically religious experiences are worth having. If we want to actually understand the mind, and overcome some of the most dangerous and enduring sources of conflict in our world, we must begin thinking about the full spectrum of human experience in the context of science.
But we must first realize that we are lost in thought.
The Mediocrity Principle
Biologist and associate professor at the University of Minnesota...
As someone who just spent a term teaching freshman introductory biology, and will be doing it again in the coming months, I have to say that the first thing that leapt to my mind as an essential skill everyone should have was algebra. And elementary probability and statistics. That sure would make my life easier, anyway — there's something terribly depressing about seeing bright students tripped up by a basic math skill that they should have mastered in grade school.
But that isn't enough. Elementary math skills are an essential tool that we ought to be able to take for granted in a scientific and technological society. What idea should people grasp to better understand their place in the universe?
I'm going to recommend the mediocrity principle. It's fundamental to science, and it's also one of the most contentious, difficult concepts for many people to grasp — and opposition to the mediocrity principle is one of the major linchpins of religion and creationism and jingoism and failed social policies. There are a lot of cognitive ills that would be neatly wrapped up and easily disposed of if only everyone understood this one simple idea.
The mediocrity principle simply states that you aren't special. The universe does not revolve around you, this planet isn't privileged in any unique way, your country is not the perfect product of divine destiny, your existence isn't the product of directed, intentional fate, and that tuna sandwich you had for lunch was not plotting to give you indigestion. Most of what happens in the world is just a consequence of natural, universal laws — laws that apply everywhere and to everything, with no special exemptions or amplifications for your benefit — given variety by the input of chance. Everything that you as a human being consider cosmically important is an accident. The rules of inheritance and the nature of biology meant that when your parents had a baby, it was anatomically human and mostly fully functional physiologically, but the unique combination of traits that make you male or female, tall or short, brown-eyed or blue-eyed were the result of a chance shuffle of genetic attributes during meiosis, a few random mutations, and the luck of the draw in the grand sperm race at fertilization.
Don't feel bad about that, though, it's not just you. The stars themselves form as a result of the properties of atoms, the specific features of each star set by the chance distribution of ripples of condensation through clouds of dust and gas. Our sun wasn't required to be where it is, with the luminosity it has — it just happens to be there, and our existence follows from this opportunity. Our species itself is partly shaped by the force of our environment through selection, and partly by fluctuations of chance. If humans had gone extinct 100,000 years ago, the world would go on turning, life would go on thriving, and some other species would be prospering in our place — and most likely not by following the same intelligence-driven technological path we did.
And if you understand the mediocrity principle, that's OK.
The reason this is so essential to science is that it's the beginning of understanding how we came to be here and how everything works. We look for general principles that apply to the universe as a whole first, and those explain much of the story; and then we look for the quirks and exceptions that led to the details. It's a strategy that succeeds and is useful in gaining a deeper knowledge. Starting with a presumption that a subject of interest represents a violation of the properties of the universe, that it was poofed uniquely into existence with a specific purpose, and that the conditions of its existence can no longer apply, means that you have leapt to an unfounded and unusual explanation with no legitimate reason. What the mediocrity principle tells us is that our state is not the product of intent, that the universe lacks both malice and benevolence, but that everything does follow rules — and that grasping those rules should be the goal of science.
Correlation is not a cause
Psychologist; Author, Consciousness: An Introduction...
The phrase "correlation is not a cause" (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.
One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B).
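The railway example can even be simulated. In this hypothetical sketch (the numbers are invented for illustration), neither A (the crowd) nor B (the train's imminence) influences the other; both simply track the common cause C (the timetable), yet they come out strongly correlated:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(42)
C = [random.uniform(0, 10) for _ in range(2000)]  # minutes since last train
A = [c + random.gauss(0, 1) for c in C]           # crowd size grows with C
B = [c + random.gauss(0, 1) for c in C]           # train nearness grows with C

# A and B never touch each other, yet they correlate strongly via C.
r = pearson(A, B)
```

Delete C from the model and the correlation vanishes; that is the whole CINAC lesson in three lists.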
I soon discovered that this understanding tended to slip away again and again, until I began a new regime, and started every lecture with an invented example to get them thinking.
"Right", I might say "Suppose it's been discovered (I don't mean it's true) that children who eat more tomato ketchup do worse in their exams. Why could this be?" They would argue that it wasn't true (I'd explain the point of thought experiments again). "But there'd be health warnings on ketchup if it's poisonous" (Just pretend it's true for now please) and then they'd start using their imaginations.
"There's something in the ketchup that slows down nerves", "Eating ketchup makes you watch more telly instead of doing your homework", "Eating more ketchup means eating more chips and that makes you fat and lazy". Yes, yes, probably wrong but great examples of A causes B — go on. And so to "Stupid people have different taste buds and don't like ketchup", "Maybe if you don't pass your exams your Mum gives you ketchup". And finally "Poorer people eat more junk food and do less well at school".
Next week: "Suppose we find that the more often people consult astrologers or psychics the longer they live." "But it can't be true — astrology's bunkum" (Sigh … just pretend it's true for now please.) OK. "Astrologers have a special psychic energy that they radiate to their clients", "Knowing the future means you can avoid dying", "Understanding your horoscope makes you happier and healthier". Yes, yes, excellent ideas, go on. "The older people get the more often they go to psychics", "Being healthy makes you more spiritual and so you seek out spiritual guidance". Yes, yes, keep going, all testable ideas, and finally "Women go to psychics more often and also live longer than men."
The point is that once you greet any new correlation with "CINAC" your imagination is let loose. Once you listen to every new science story Cinacally (which conveniently sounds like "cynically") you find yourself thinking: OK, if A doesn't cause B, could B cause A? Could something else cause them both or could they both be the same thing even though they don't appear to be? What's going on? Can I imagine other possibilities? Could I test them? Could I find out which is true? Then you can be critical of the science stories you hear. Then you are thinking like a scientist.
Stories of health scares and psychic claims may get people's attention but understanding that a correlation is not a cause could raise levels of debate over some of today's most pressing scientific issues. For example, we know that global temperature rise correlates with increasing levels of atmospheric carbon dioxide but why? Thinking Cinacally means asking which variable causes which or whether something else causes both, with important consequences for social action and the future of life on earth.
Some say that the greatest mystery facing science is the nature of consciousness. We seem to be independent selves having consciousness and free will, and yet the more we understand how the brain works, the less room there seems to be for consciousness to do anything. A popular way of trying to solve the mystery is the hunt for the "neural correlates of consciousness". For example, we know that brain activity in parts of the motor cortex and frontal lobes correlates with conscious decisions to act. But do our conscious decisions cause the brain activity, does the brain activity cause our decisions, or are both caused by something else?
The fourth possibility is that brain activity and conscious experiences are really the same thing, just as light turned out not to be caused by electromagnetic radiation but to be electromagnetic radiation, or heat turned out to be the movement of molecules in a fluid. At the moment we have no inkling of how consciousness could be brain activity but my guess is that it will turn out that way. Once we clear away some of our delusions about the nature of our own minds, we may finally see why there is no deep mystery and our conscious experiences simply are what is going on inside our brains. If this is right then there are no neural correlates of consciousness. But whether it is or not, remembering CINAC and working slowly from correlations to causes is likely to be how this mystery is finally solved.
Q. E. D. Moments
Information scientist and Professor of Electrical Engineering and Law at the...
Everyone should know what proof feels like. It reduces all other species of belief to a distant second-class status. Proof is the far end on a cognitive scale of confidence that varies through levels of doubt. And most people never experience it.
Feeling proof comes from finishing a proof. It does not come from pointing at a proof in a book or in the brain of an instructor. It comes when the prover himself takes the last logical step on the deductive staircase. Then he gets to celebrate that logical feat by declaring "Q. E. D." or "Quod Erat Demonstrandum" or just "Quite Easily Done." Q. E. D. states that he has proven or demonstrated the claim that he wanted to prove. The proof need not be original or surprising. It just needs to be logically correct to produce a Q. E. D. moment. A proof of the Pythagorean Theorem has always sufficed.
The only such proofs that warrant the name are those in mathematics and formal logic. Each logical step has to come with a logically sufficient justification. That way each logical step comes with binary certainty. Then the final result itself follows with binary certainty. It is as if the prover multiplied the number 1 by itself for each step in the proof. The result is still the number 1. That is why the final result warrants a declaration of Q. E. D. That is also why the process comes to an unequivocal halt if the prover cannot justify a step. Any act of faith or guesswork or cutting corners will destroy the proof and its demand for binary certainty.
The catch is that we can really only prove tautologies.
The great binary truths of mathematics are still logically equivalent to the tautology "1 = 1" or "green is green." This differs from the factual statements we make about the real world — statements such as "Pine needles are green" or "Chlorophyll molecules reflect green light."
These factual statements are approximations. They are technically vague or fuzzy. And they often come juxtaposed with probabilistic uncertainty: "Pine needles are green with high probability." Note that this last statement involves triple uncertainty. There is first the vagueness of green pine needles because there is no bright line between greenness and non-greenness. It is a matter of degree. There is second only a probability whether pine needles have the vague property of greenness. And there is last the magnitude of the probability itself. The magnitude is the vague or fuzzy descriptor "high" because here too there is no bright line between high probability and not-high probability.
No one has ever produced a statement of fact that has the same 100% binary truth status as a mathematical theorem. Even the most accurate energy predictions of quantum mechanics hold only out to a few decimal places. Binary truth would require getting it right out to infinitely many decimal places.
Most scientists know this and rightly sweat it. The logical premises of a math model only approximately match the world that the model purports to model. It is not at all clear how such grounding mismatches propagate through to the model's predictions. Each infected inferential step tends to degrade the confidence of the conclusion as if multiplying fractions less than one. Modern statistics can appeal to confidence bounds if there are enough samples and if the samples sufficiently approximate the binary assumptions of the model. That at least makes us pay in the coin of data for an increase in certainty.
It is a big step down from such imperfect scientific inference to the approximate syllogistic reasoning of the law. There the disputant insists that similar premises must lead to similar conclusions. But this similarity involves its own approximate pattern matching of inherently vague patterns of causal conduct or hidden mental states such as intent or foreseeability. The judge's final ruling of "granted" or "denied" resolves the issue in practice. But it is technically a non sequitur. The product of any numbers between zero and one is again always less than one. So the confidence of the conclusion can only fall as the steps in the deductive chain increase. The clang of the gavel is no substitute for proof.
Such approximate reasoning may be as close as we can come to a Q. E. D. moment when using natural language. The everyday arguments that buzz in our brains hit far humbler logical highs. That is precisely why we all need to prove something at least once — to experience at least one true Q. E. D. moment. Those rare but god-like tastes of ideal certainty can help keep us from mistaking it for anything else.
Professor of Psychology, University of Texas, Austin; Coauthor: Why Women Have Sex...
When most people think about evolution by selection, they conjure up phrases such as "survival of the fittest" or "nature red in tooth and claw." These focus attention on the Darwinian struggle for survival. Many scientists, but few others, know that evolution by selection occurs through the process of differential reproductive success by virtue of heritable differences in design, not by differential survival success. And differential reproductive success often boils down to differential mating success, the focus of Darwin's 1871 theory of sexual selection.
Darwin identified two separate (but potentially related) causal processes by which sexual selection occurs. The first, intrasexual or same-sex competition, involves members of one sex competing with each other in various contests, physical or otherwise, the winners of which gain preferential sexual access to mates. Qualities that lead to success evolve. Those linked to failure bite the evolutionary dust. Evolution, change over time, occurs as a consequence of the process of intrasexual competition. The second, intersexual selection, deals with preferential mate choice. If members of one sex exhibit a consensus about qualities desired in mates, and those qualities are partially heritable, then those of the opposite sex possessing the desired qualities have a mating advantage. They get preferentially chosen. Those lacking desired mating qualities get shunned, banished, and remain mateless (or must settle for low quality mates). Evolutionary change over time occurs as a consequence of an increase in frequency of desired traits and a decrease in frequency of disfavored traits.
Darwin's theory of sexual selection, controversial in his day and relatively neglected for nearly a century after its publication, has mushroomed today into a tremendously important theory in evolutionary biology and evolutionary psychology. Research on human mating strategies has exploded over the past decade, as the profound implications of sexual selection become more deeply understood. Adding sexual selection to everyone's cognitive toolkit will provide profound insights into many human phenomena that otherwise remain baffling. In its modern formulations, sexual selection theory provides answers to weighty and troubling questions that elude many scientists and most non-scientists living today:
• Why do male and female minds differ?
• What explains the rich menu of human mating strategies?
• Why is conflict between the sexes so pervasive?
• Why does conflict between women and men focus so heavily on sex?
• What explains sexual harassment and sexual coercion?
• Why do men die earlier than women, on average, in every culture around the world?
• Why are most murderers men?
• Why are men so much keener than women on forming coalitions for warfare?
• Why are men so much more prone to becoming suicide terrorists?
• Why is suicide terrorism so much more prevalent in polygynous cultures that create a greater pool of mateless males?
Adding sexual selection theory to everyone's cognitive toolkit, in short, provides deep insight into the nature of human nature, people's obsession with sex and mating, the origins of sex differences, and many of the profound social conflicts that beset us all.
Nexus causality, moral warfare and misattribution arbitrage.
Founder of field of Evolutionary Psychology; Co-Director, UC Santa Barbara's......
We could become far more intelligent than we are by adding to our stock of concepts, and by forcing ourselves to use them even when we don't like what they are telling us. This will be nearly always, because they generally tell us that our self-evidently superior selves and ingroups are error-besotted. We all start from radical ignorance in a world that is endlessly strange, vast, complex, intricate, and surprising. Deliverance from ignorance lies in good concepts — inference fountains that geyser out insights that organize and increase the scope of our understanding. We are drawn to them by the fascination of the discoveries they afford, but resist using them well and freely because they would reveal too many of our apparent achievements to be embarrassing or tragic failures. Those of us who are non-mythical lack the spine that Oedipus had — the obsidian resolve that drove him to piece together shattering realizations despite portents warning him off. Because of our weakness, "to see what is in front of one's nose needs a constant struggle" as Orwell says. So why struggle? Better instead to have one's nose and what lies beyond shift out of focus — to make oneself hysterically blind as convenience dictates, rather than to risk ending up like Oedipus, literally blinding oneself in horror at the harvest of an exhausting, successful struggle to discover what is true.
Alternatively, even modest individual-level improvements in our conceptual toolkit can have transformative effects on our collective intelligence, promoting incandescent intellectual chain reactions among multitudes of interacting individuals. If this promise of intelligence-amplification through conceptual tools seems like hyperbole, consider that the least inspired modern engineer, equipped with the conceptual tools of calculus, can understand, plan and build things far beyond what da Vinci or the mathematics-revering Plato could have achieved without it. We owe a lot to the infinitesimal, Newton's counterintuitive conceptual hack — something greater than zero but less than any finite magnitude. Far simpler conceptual innovations than calculus have had even more far reaching effects — the experiment (a danger to authority), zero, entropy, Boyle's atom, mathematical proof, natural selection, randomness, particulate inheritance, Dalton's element, distribution, formal logic, culture, Shannon's definition of information, the quantum…
Here are three simple conceptual tools that might help us see in front of our noses: nexus causality, moral warfare, and misattribution arbitrage. Causality itself is an evolved conceptual tool that simplifies, schematizes, and focuses our representation of situations. This cognitive machinery guides us to think in terms of the cause — of an outcome having a single cause. Yet for enlarged understanding, it is more accurate to represent outcomes as caused by an intersection or nexus of factors (including the absence of precluding conditions). In War and Peace, Tolstoy asks "When an apple ripens and falls, why does it fall? Because of its attraction to the earth, because its stem withers, because it is dried by the sun, because it grows heavier, because the wind shakes it….?" — with little effort any modern scientist could extend Tolstoy's list endlessly. We evolved, however, as cognitively improvisational tool-users, dependent on identifying actions we could take that lead to immediate payoffs. So, our minds evolved to represent situations in a way that highlighted the element in the nexus that we could manipulate to bring about a favored outcome. Elements in the situation that remained stable and that we could not change (like gravity or human nature) were left out of our representation of causes. Similarly, variable factors in the nexus (like the wind blowing) that we could not control, but that predicted an outcome (the apple falling), were also useful to represent as causes, in order to prepare ourselves to exploit opportunities or avoid dangers. So the reality of the causal nexus is cognitively ignored in favor of the cartoon of single causes. While useful for a forager, this machinery impoverishes our scientific understanding, rendering discussions (whether elite, scientific, or public) of the "causes" — of cancer, war, violence, mental disorders, infidelity, unemployment, climate, poverty, and so on — ridiculous.
Similarly, as players of evolved social games, we are designed to represent others' behavior and associated outcomes as caused by free will (by intentions). That is, we evolved to view "man," as Aristotle put it, as "the originator of his own actions." Given an outcome we dislike, we ignore the nexus, and trace "the" causal chain back to a person. We typically represent the backward chain as ending in — and the outcome as originating in — the person. Locating the "cause" (blame) in one or more persons allows us to punitively motivate others to avoid causing outcomes we don't like (or to incentivize outcomes we do like). More despicably, if something happens that many regard as a bad outcome, this gives us the opportunity to sift through the causal nexus for the one thread that colorably leads back to our rivals (where the blame obviously lies). Lamentably, much of our species' moral psychology evolved for moral warfare, a ruthless zero-sum game. Offensive play typically involves recruiting others to disadvantage or eliminate our rivals by publicly sourcing them as the cause of bad outcomes. Defensive play involves giving our rivals no ammunition to mobilize others against us.
The moral game of blame attribution is only one subtype of misattribution arbitrage. For example, epidemiologists estimate that it was not until 1905 that you were better off going to a physician. (Semmelweis noticed that doctors doubled the mortality rate of mothers at delivery.) For thousands of years, the role of the physician pre-existed its rational function, so why were there physicians? Economists, forecasters, and professional portfolio managers typically do no better than chance, yet command immense salaries for their services. Food prices are driven up to starvation levels in underdeveloped countries, based on climate models that cannot successfully retrodict known climate history. Liability lawyers win huge sums for plaintiffs who get diseases at no higher rates than others not exposed to "the" supposed cause. What is going on? The complexity and noise permeating any real causal nexus generates a fog of uncertainty. Slight biases in causal attribution, or in blameworthiness (e.g., sins of commission are worse than sins of omission) allow a stable niche for extracting undeserved credit or targeting undeserved blame. If the patient recovers, it was due to my heroic efforts; if not, the underlying disease was too severe. If it weren't for my macroeconomic policy, the economy would be even worse. The abandonment of moral warfare, and a wider appreciation of nexus causality and misattribution arbitrage, would help us all shed at least some of the destructive delusions that cost humanity so much.
Homo Sensus Sapiens: The animal that feels and rationalizes
For the last three years, Mexican narcotraffickers have decapitated hundreds of people to gain control of routes for transporting cocaine. In the last two decades, Colombian narco-paramilitaries have tortured and incinerated thousands of people, in part because they needed more land for their crops and for transporting cocaine. In both cases, they were not satisfied with 10 or 100 million dollars; even the richest narcotraffickers kill or die for more.
In Guatemala and Honduras, cruel mortal battles between gangs known as "maras" break out over control of a single street in a poor neighborhood. In Rwanda's genocide, in 1994, people who had been friends their entire lives suddenly became mortal enemies because of their ethnic appearance.
Is this the enlightened man?
These cases may sound like rarities. However, in any city, on any random street, it is easy to find a thief who is willing to kill or die for 10 bucks to satisfy a need for heroin, a fanatic who is willing to kill or die defending a "merciful God", or a regular guy next door willing to kill or die in a fight after a car crash.
Is this rationality?
It is easy to find examples in which automatic responses of emotions and feelings, like ambition, anger, or anxiety, overcome rationality. Those responses keep assaulting us like uncontrollable forces of nature, like earthquakes or storms.
We modern humans taxonomically define ourselves as Homo Sapiens Sapiens, that is, wise-wise beings. Apparently, we can dominate the influence of natural forces, whether they are instincts, viruses or storms. Homo Sapiens Sapiens represents the overconfidence of the enlightened man who understands and manipulates nature while making the best decisions. However, we cannot stop destroying natural resources and consuming more than we need. We cannot control excessive ambition. We cannot avoid surrendering to the power of sex or money. Despite our evolved brain, despite our capacity to argue and think in abstract ways, despite the amazing power of the neocortex, inner feelings are still at the base of our behavior.
The WisdomX2 characteristic typically does not coincide with our neuropsychological reality. To discover this, you can pay attention to your everyday actions; you can trust neurological observations showing that instinctive areas of the brain are active most of the time; or you can trust evidence showing that our nervous system is constantly at the mercy of the neurotransmitters and hormones that determine our emotional responses.
Observations from experimental psychology and behavioral economics also show that people do not always try to maximize present or future profits. Rational expectations, once thought to be the main characteristic of Homo Economicus, are no longer neurologically sustainable. Sometimes people care nothing about the future or about profit; sometimes we only want to satisfy a desire, right here, right now, no matter what.
Human beings do have unique rational capacities. No other animal can evaluate, simulate and decide for the best the way humans do. However, "having" a capacity does not imply "executing" it.
The oldest, innermost areas of the human brain, the reptilian brain, generate and regulate instinctive and automatic responses, which play a role in preserving the organism. Because of these areas, we move without analyzing the consequences of each action; we move like a machine of automatic and unconscious induction. We walk without checking that the floor will hold after each step, and we run faster than normal when we feel a threat, not because of rational planning but because of automatic responses.
Only strict training allows us to dominate our instincts. However, for most of us, the "don't panic" advice only works when we are not in a panic. Most of us should be defined as beings moved first by instincts, social empathy and automatic responses to perception, rather than by sophisticated plans and arguments.
Homo Economicus and Homo Politicus are, therefore, normative entelechies, behavioral benchmarks rather than descriptive models. Always calculating utility and always resolving social disputes through civilized debate are behavioral utopias rather than adjusted descriptions of what we are. However, for decades we have been constructing policies, models and sciences on assumptions that do not coincide with reality.
Homo Sensus Sapiens is a more accurate image of the human being.
The concepts of the liberal hyper-rationalist man and the conservative hyper-communitarian man are hypertrophies of a single human facet. The first is a hypertrophy of the neocortex: the idea that rationality dominates instincts. The second is a hypertrophy of the inner reptilian brain: the idea that social empathy and cohesive institutions define humanity. However, we are both at the same time. We are the tension of the sensus and the sapiens.
The concept of Homo Sensus Sapiens allows us to realize that we stand somewhere between overconfidence in our rational capacities and resignation to our instincts. Homo Sensus Sapiens reminds us that we can neither surrender to nor escape from rationality or instinct. But this concept is not only about criticizing overconfidence or resignation. It is about improving explanations of social phenomena. Social scientists should not always have to choose between rationality and irrationality. They should leave the comfort zone of positivist fragmentation and integrate scientific areas to explain an analogue human being, not a digital one: a being defined by the continuum between sensitivity and rationality. This adjusted image would yield better inputs for public policy.
The first character of this Homo, the Sensus, allows movement, reproduction, the atomization of his biology, and the preservation of the species. The second, the Sapiens, allows this Homo to oscillate psychologically between the ontological world of matter and energy and the epistemological world of socio-cultural codification, imagination, arts, technology and symbolic construction. This combination allows us to understand the nature of a hominid characterized by the constant tension between emotions and reason, and by the search for a middle point of biological and cultural evolution. We are not only fears, not only plans. We are Homo Sensus Sapiens, the animal that feels and rationalizes.
Psychologist, Yale University; Author, Descartes' Baby...
We are powerfully influenced by irrational processes such as unconscious priming, conformity, groupthink, and self-serving biases. These affect the most trivial aspects of our lives, such as how quickly we walk down a city street, and the most important, such as who we choose to marry. The political and moral realms are particularly vulnerable to such influences. While many of us would like to think that our views on climate change or torture or foreign policy are the result of rational deliberation, we are more affected than we would like to admit by considerations that have nothing to do with reason.
But this is not inevitable. Consider science. Plainly, scientists are human and possess the standard slate of biases and prejudices and mindbugs. This is what skeptics emphasize when they say that science is "just another means of knowing" or "just like religion". But science also includes procedures — such as replicable experiments and open debate — that cultivate the capacity for human reason. Scientists can reject common wisdom, they can be persuaded by data and argument to change their minds. It is through these procedures that we have discovered extraordinary facts about the world, such as the structure of matter and the evolutionary relationship between monkey and man.
The cultivation of reason isn't unique to science; other disciplines such as mathematics and philosophy possess it as well. But it is absent in much of the rest of life. So I admit to twisting the question a bit: The concept that people need to add to their toolkit isn't a scientific discovery; it is science itself. Wouldn't the world be better off if, as we struggle with moral and political and social problems, we adopted those procedures that make science so successful?
Commentator on Internet and politics...
Constant awareness of the Einstellung Effect would make a useful addition to our cognitive toolkit.
The Einstellung Effect is more ubiquitous than its name suggests. We experience it constantly when we try to solve a problem by pursuing solutions that have worked for us in the past, instead of evaluating and addressing the problem on its own terms. Thus, while we may eventually solve the problem, we may also be wasting an opportunity to do so in a more rapid, effective, and resourceful manner.
Think of a chess match. If you are a chess master with a deep familiarity with chess history, you are likely to spot game developments that look similar to other matches you know by heart. Knowing how those previous matches unfolded, you may automatically pursue similar solutions.
This may be the right thing to do in matches that are exactly alike, but in all other situations you've got to watch out! Familiar solutions may not be optimal. Some recent research into occurrences of the Einstellung Effect in chess players suggests that it becomes less prominent once players reach a certain level of mastery: they get a better grasp of the risks of pursuing solutions that merely look familiar, and try to avoid acting on "autopilot".
The irony here is that the more expansive our cognitive toolkit, the more likely we are to fall back on solutions and approaches that have worked in the past instead of asking whether the problem in front of us is fundamentally different from anything else we have dealt with in the past. A cognitive toolkit that has no built-in awareness of the Einstellung Effect seems somewhat defective to me.
Professor of Evolutionary Biology, Reading University, England and The Santa Fe......
The Oracle of Delphi famously pronounced Socrates to be "the most intelligent man in the world because he knew that he knew nothing". Over 2000 years later the physicist-turned-historian Jacob Bronowski would emphasize — in the last episode of his landmark 1970s television series "The Ascent of Man" — the danger of our all-too-human conceit of thinking we know something. What Socrates knew, and what Bronowski had come to appreciate, is that knowledge — true knowledge — is difficult, maybe even impossible, to come by; it is prone to misunderstanding and counterfactuals; and, most importantly, it can never be acquired with exact precision: there will always be some element of doubt about anything we come to "know" from our observations of the world.
What is it that adds doubt to our knowledge? It is not just the complexity of life: uncertainty is built into anything we measure. No matter how well you can measure something, you might be wrong by up to ½ of the smallest unit you can discern.
If you tell me I am 6 feet tall, and you can measure to the nearest inch, I might actually be 5' 11½" or 6' 0½", and you (and I) won't know the difference. If something is really small you won't even be able to measure it, and if it is really, really small, a light microscope (and thus your eye, both of which can only see objects larger than the shortest wavelength of visible light) won't even know it is there. What if you measure something repeatedly?
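The rounding point above can be sketched in a few lines of Python (the heights are hypothetical numbers chosen for illustration): an instrument that reads only to the nearest unit cannot distinguish values that differ by almost a whole unit.

```python
# A minimal sketch of measurement uncertainty: a ruler that reads to the
# nearest inch can be off by up to half an inch either way.
def measure_to_nearest_inch(true_height):
    """Simulate a measurement that can only resolve whole inches."""
    return round(true_height)

short_person = 71.6   # a bit under 6 feet (72 inches)
tall_person = 72.4    # a bit over 6 feet

# Two genuinely different heights produce identical readings,
# so the real difference between them is invisible to the ruler.
print(measure_to_nearest_inch(short_person))  # 72
print(measure_to_nearest_inch(tall_person))   # 72
```

The same logic applies at any precision: halving the smallest discernible unit halves the uncertainty, but never eliminates it.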
This helps, but consider the plight of those charged with maintaining international standards of weights and measures. There is a lump of metal stored under a glass case in Sèvres, France. It is, by the decree of Le Système International d'Unités, the definition of a kilogram. How much does it weigh? Well, by definition, whatever it weighs is a kilogram. But the fascinating thing is that it has never weighed exactly the same twice. On the days it weighs less than a kilogram, you are not getting such a good deal at the grocery store. On other days you are.
The often blithe way in which scientific "findings" are reported by the popular press can mask just how difficult it is to acquire reliable knowledge. Height and weight are — as far as we know — single dimensions. Consider then how much more difficult it is to measure something like intelligence, the risk of getting cancer from eating too much meat, whether cannabis should be legalized, whether the climate is warming and why, what a "shorthand abstraction" or even "science" is, the risk of developing psychosis from drug abuse, the best way to lose weight, whether it is better to force people receiving state benefits to work, whether prisons work, how to quit smoking, whether a glass of wine every day is good for you, whether it will hurt your children's eyes to use 3D glasses, or even just the best way to brush your teeth. In each case, what was actually measured? Who was measured, who were they compared to, and for how long? Are they like you and me? Were there other factors that could explain the outcome?
The elusive nature of knowledge should remind us to be humble when interpreting it and acting on it, and this should grant us both tolerance and skepticism towards others and their interpretations: knowledge should always be treated as a hypothesis.
It has only recently emerged that Bronowski himself was involved in the Second World War project to design nuclear weapons — vicious projectiles of death that don't discriminate between good guys and bad guys. Maybe Bronowski's later humility was borne of this realization — that our views can be wrong, and that they can have consequences for others' lives.
Eager detractors of science as a way of understanding the world will jump on these ideas with glee, waving them about as proof that "nothing is real" and that science and its outputs are as much a human construct as art or religion. This is facile, ignorant and naïve.
Measurement and the "science" or theories it spawns must be treated with humility precisely because they are powerful ways of understanding and manipulating the world. Their observations can be replicated — even if imperfectly — and others can agree on how to make the measurements on which they depend, be they of intelligence, the mass of the Higgs boson, poverty, the speed at which proteins can fold into their three dimensional structures, or how big gorillas are.
No other system for acquiring knowledge even comes close to science, but this is precisely why we must treat its conclusions with humility. Einstein knew this when he said "all our science measured against reality is primitive and childlike" and yet he added "it is the most precious thing we have".
The Pessimistic Meta-Induction from the History of Science
Okay, okay: it's a terrible phrase. (In my defense, I didn't coin it. Philosophers of science have been kicking it around for a while.) But if "The Pessimistic Meta-Induction from the History of Science" is cumbersome to say and difficult to remember, it is also a great idea. In fact, as the "meta" part suggests, it's the kind of idea that puts all other ideas into perspective.
Here's the gist: because so many scientific theories from bygone eras have turned out to be wrong, we must assume that most of today's theories will eventually prove incorrect as well. And what goes for science goes in general. Politics, economics, technology, law, religion, medicine, child-rearing, education: no matter the domain of life, one generation's verities so often become the next generation's falsehoods that we might as well have a Pessimistic Meta-Induction from the History of Everything.
Good scientists understand this. They recognize that they are part of a long process of approximation. They know that they are constructing models rather than revealing reality. They are comfortable working under conditions of uncertainty — not just the local uncertainty of "Will this data bear out my hypothesis?", but the sweeping uncertainty of simultaneously pursuing and being cut off from absolute truth.
The rest of us, by contrast, often engage in a kind of tacit chronological exceptionalism. Unlike all those suckers who fell for the flat earth or the geocentric universe or cold fusion or the cosmological constant, we ourselves have the great good luck to be alive during the very apex of accurate human thought. The literary critic Harry Levin put this nicely: "The habit of equating one's age with the apogee of civilization, one's town with the hub of the universe, one's horizons with the limits of human awareness, is paradoxically widespread." At best, we nurture the fantasy that knowledge is always cumulative, and therefore concede that future eras will know more than we do. But we ignore or resist the fact that knowledge collapses as often as it accretes, that our own most cherished beliefs might appear patently false to posterity.
That fact is the essence of the meta-induction — and yet, despite its name, this idea is not pessimistic. Or rather, it is only pessimistic if you hate being wrong. If, by contrast, you think that uncovering your mistakes is one of the best ways to revise and improve your understanding of the world, then this is actually a highly optimistic insight.
The idea behind the meta-induction is that all of our theories are fundamentally provisional and quite possibly wrong. If we can add that idea to our cognitive toolkit, we will be better able to listen with curiosity and empathy to those whose theories contradict our own. We will be better able to pay attention to counterevidence: those anomalous bits of data that make our picture of the world a little weirder, more mysterious, less clean, less done. And we will be able to hold our own beliefs a bit more humbly, in the happy knowledge that better ideas are almost certainly on the way.
A Cognitive Toolkit Full Of Garbage
Brain researcher, Chair of the Board of Directors at the Center for Human Sciences,...
To get rid of garbage is essential, including mental garbage. Cognitive toolkits are filled with such garbage, simply because we are victims of ourselves. We should regularly empty this garbage can, or, in case we enjoy sitting in garbage, we had better check how "shorthand abstractions" (SHA's) limit our creativity (certainly an SHA itself). Why is the cognitive toolkit filled with garbage?
Let us look back in history (SHA): Modern science (SHA) can be said to have started in 1620 with "Novum Organum" ("New Instrument") by Francis Bacon. It should impress us today that his analysis (SHA) begins with a description (SHA) of four mistakes we run into when we do science. Unfortunately, we usually forget these warnings. Francis Bacon argued that we are — first — victims of evolution (SHA), i.e., that our genes (SHA) define constraints that necessarily limit insight (SHA). Second — we suffer from the constraints of imprinting (SHA); the culture (SHA) we live in provides a frame for epigenetic programs (SHA) that ultimately define the structure (SHA) of neuronal processing (SHA). Third — we are corrupted by language (SHA), as thoughts (SHA) cannot easily be transformed into verbal expressions. Fourth — we are guided or even controlled by theories (SHA), whether explicit or implicit.
What are the implications for a cognitive toolkit? We are caught, for instance, in a language trap. On the basis of our evolutionary heritage we have the power of abstraction (SHA), but this has, in spite of some advantages we brag about (to make us superior to other creatures), a disastrous consequence: abstractions are usually represented in words; apparently we cannot do otherwise; we have to "ontologize"; we invent nouns to extract knowledge (SHA) from processes (SHA). (I do not refer here to the powerful pictorial shorthand abstractions.) Abstraction is obviously complexity reduction (SHA). We make it simple. Why do we do this? Our evolutionary heritage dictates that we be fast. However, speed may give an advantage for a "survival toolkit", but not for a "cognitive toolkit". It is a categorical error (SHA) to confuse speed in action with speed in thinking. The selection pressure for speed invites us to neglect the richness of facts. This pressure allows the invention (SHA) of a simple, clear, easy-to-understand, easy-to-refer-to, easy-to-communicate shorthand abstraction. Thus, because we are victims of our biological past and, as a consequence, victims of ourselves, we end up with shabby SHA's that have left reality behind. If there is one disease all humans share, it is "monocausalitis", i.e. the motivation (SHA) to explain everything on the basis of just one cause. This may be a nice intellectual exercise, but it is simply misleading.
Of course we depend on communication (SHA), and this requires verbal references usually tagged with language. But if we do not understand, within the communicative frame or reference system (SHA), that we are victims of ourselves by "ontologizing" and continuously creating "practical" SHA's, we simply use a cognitive toolkit of mental garbage. Is there a pragmatic way out, other than radically getting rid of mental garbage? Yes, perhaps: simply stop using the key SHA's explicitly in one's toolkit. Working on "consciousness", don't use (at least for one year) the SHA consciousness; if you work on the "self", never refer explicitly to the self. Going through one's own garbage, one discovers many misleading SHA's; just a few in my focus of attention (SHA) are: the brain as a net, localization of function, representation, inhibition, threshold, decision, the present.... An easy way out is of course to refer to some of these SHA's as metaphors (SHA), but this again is evading the problem (SHA). I am aware of the fact (SHA) that I am also a victim of evolution, and suggesting "garbage" as an SHA suffers from the same problem; even the concept of garbage required a discovery (SHA). But we cannot do otherwise than simply be aware of the challenge (SHA) that the content of the cognitive toolkit is characterized by self-referentiality (SHA), i.e. by the fact that SHA's define themselves by their unreflected use.
Assistant professor of psychology at the University of California...
On its face, defeasibility is a modest concept with roots in logic and epistemology. An inference is defeasible if it can potentially be "defeated" in light of additional information. Unlike deductively sound conclusions, the products of defeasible reasoning remain subject to revision, held tentatively no matter how firmly.
All scientific claims — whether textbook pronouncements or haphazard speculations — are held defeasibly. It is a hallmark of the scientific process that claims are forever vulnerable to refinement and rejection, hostage to what the future could bring. Far from being a weakness, this is a source of science's greatness. Because scientific inferences are defeasible, they remain responsive to a world that can reveal itself gradually, change over time, and deviate from our dearest assumptions.
The concept of defeasibility has proven valuable in characterizing artificial and natural intelligence. Everyday inferences, no less than scientific inferences, are vetted by the harsh judge of novel data: additional information that can potentially defeat current beliefs. On further inspection, the antique may turn out to be a fake and the alleged culprit an innocent victim. Dealing with an uncertain world forces cognitive systems to abandon the comforts of deduction and engage in defeasible reasoning.
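As a minimal illustration of defeasible inference in the AI sense, here is a Python sketch of the classic "Tweety" default-reasoning example from the artificial-intelligence literature (the rule set is illustrative, not drawn from this essay):

```python
# Defeasible (default) reasoning: the default rule "birds fly" is held only
# tentatively, and can be defeated when additional information arrives.
def can_fly(facts):
    """Infer flight from the default rule, unless a defeater is present."""
    if "penguin" in facts:       # defeater: penguins are flightless birds
        return False
    return "bird" in facts       # default rule: birds fly

beliefs = {"bird"}
print(can_fly(beliefs))          # True: the default inference holds

beliefs.add("penguin")           # new information arrives...
print(can_fly(beliefs))          # False: the earlier conclusion is defeated
```

Unlike a deductively sound conclusion, which no further premise can overturn, the first inference here was correct to draw and yet correct to abandon: that is the sense in which defeasible beliefs are "held tentatively no matter how firmly."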
Defeasibility is a powerful concept when we recognize it not as a modest term of art, but as the proper attitude towards all belief. Between blind faith and radical skepticism is a vast but sparsely populated space where defeasibility finds its home. Irreversible commitments would be foolish; boundless doubt paralyzing. Defeasible beliefs provide the provisional certainty necessary to navigate an uncertain world.
Recognizing the potential revisability of our beliefs is a prerequisite to rational discourse and progress, be it in science, politics, religion, or the mundane negotiations of daily life. Consider the world we could live in if all of our local and global leaders, if all of our personal and professional friends and foes, recognized the defeasibility of their beliefs and acted accordingly. That sure sounds like progress to me. But of course, I could be wrong.
Time Span of Discretion
Technology Forecaster; Consulting Associate Professor, Stanford University...
Half a century ago, while advising a UK metals company, Elliott Jaques had a deep and controversial insight. He noticed that workers at different levels of the company had very different time horizons. Line workers focused on tasks that could be completed in a single shift, while managers devoted their energies to tasks requiring six months or more to complete. Meanwhile, their CEO was pursuing goals realizable only over the span of several years.
After several decades of empirical study, Jaques concluded that just as humans differ in intelligence, we differ in our ability to handle time-dependent complexity. We each have a natural time horizon we are comfortable with, what Jaques called the "time span of discretion": the length of the longest task an individual can successfully undertake. Jaques observed that organizations implicitly recognize this fact in everything from titles to salary: line workers are paid hourly, managers annually, and senior executives are compensated with longer-term incentives such as stock options.
Jaques also noted that effective organizations were composed of workers with differing time spans of discretion, each working at a level of natural comfort. If a worker's job was beyond their natural time span of discretion, they would fail. If it demanded less, they would be insufficiently challenged, and thus unhappy.
Time span of discretion is about achieving intents that have explicit time frames, and in Jaques' model one can rank discretionary capacity in a tiered system. Level 1 encompasses jobs such as sales associates or line workers handling routine tasks with a time horizon of up to three months. Levels 2 to 4 encompass various managerial positions with time horizons between one and five years. Level 5 crosses over to five to ten years and is the domain of small-company CEOs and large-company executive vice presidents. Beyond Level 5, one enters the realm of statesmen and legendary business leaders comfortable with innate time horizons of 20 years (Level 6), 50 years (Level 7) or beyond. Level 8 is the realm of 100-year thinkers like Henry Ford, while Level 9 is the domain of the Einsteins, Gandhis, and Galileos, individuals capable of setting grand tasks into motion that continue centuries into the future.
Jaques' ideas enjoyed currency into the 1970s and then fell into eclipse, assailed as unfair stereotyping or, worse, a totalitarian stratification evocative of Huxley's Brave New World. It is now time to reexamine Jaques' theories and revive time span of discretion as a tool for understanding our social structures and matching them to the overwhelming challenges facing global society. Perhaps problems like climate change are intractable because we have a political system that elects Level 2 thinkers to Congress when we really need Level 5s in office. As such, Jaques' ideas might help us realize that the old saying "he who thinks longest wins" is only half the story, and that the society in which everyone explicitly thinks about tasks in the context of time will be the most effective.
Associate Professor of Journalism, New York University...
There is a problem that anyone who has lived in New York City must wonder about: you can't get a cab from 4 to 5 pm. The reason for this is not a mystery: at a moment of peak demand, taxi drivers tend to change shifts. Too many cabs are headed to garages in Queens because when a taxi is operated by two drivers 24 hours a day, a fair division of shifts is to switch over at 5 pm. Now this is a problem for the city's Taxi and Limousine Commission, and it may even be a hard one to solve, but it is not a wicked problem. For one thing, it's easy to describe, as I just showed you. That right there boots it from the category.
Among some social scientists, there is this term of art: wicked problems. We would be vastly better off if we understood what wicked problems are, and learned to distinguish between them and regular (or "tame") problems.
Wicked problems have these features: It is hard to say what the problem is, to define it clearly or to tell where it stops and starts. There is no "right" way to view the problem, no definitive formulation. The way it's framed will change what the solution appears to be. Someone can always say that the problem is just a symptom of another problem and that someone will not be wrong. There are many stakeholders, all with their own frames, which they tend to see as exclusively correct. Ask what the problem is and you will get a different answer from each. The problem is inter-connected to a lot of other problems; pulling them apart is almost impossible.
It gets worse. Every wicked problem is unique, so in a sense there is no prior art and solving one won't help you with the others. No one has "the right to be wrong," meaning enough legitimacy and stakeholder support to try stuff that will almost certainly fail, at first. Instead failure is savaged, and the trier is deemed unsuitable for another try. The problem keeps changing on us. It is never definitely resolved. Instead, we just run out of patience, or time, or money. It's not possible to understand the problem first, then solve it. Rather, attempts to solve it reveal further dimensions of the problem. (Which is the secret of success for people who are "good" at wicked problems.)
Know any problems like that? Sure you do. Probably the best example in our time is climate change. What could be more inter-connected than it? Someone can always say that climate change is just a symptom of another problem (our entire way of life, perhaps) and he or she would not be wrong. We've certainly never solved anything like it before. Stakeholders: everyone on the planet, every nation, every company.
When General Motors was about to go bankrupt and throw tens of thousands of people out of work, that was a big, honking problem, which rightly landed on the president's desk, but it was not a wicked one. Barack Obama's advisors could present him with a limited range of options; if he decided to take the political risk and save General Motors from collapse, he could be reasonably certain that the recommended actions would work. If they didn't, he could try more drastic measures.
But health care reform wasn't like that at all. In the United States, rising health care costs are a classic case of a wicked problem. No "right" way to view it. Every solution comes with its own contestable frame. Multiple stakeholders who don't define the problem the same way. If the uninsured go down but costs go up, is that progress? We don't even know.
Still, we would be better off if we knew when we were dealing with a wicked problem, as opposed to the regular kind. If we could designate some problems as wicked we might realize that "normal" approaches to problem-solving don't work. We can't define the problem, evaluate possible solutions, pick the best one, hire the experts and implement. No matter how much we may want to follow a routine like that, it won't succeed. Institutions may require it, habit may favor it, the boss may order it, but wicked problems don't care.
Presidential debates that divided wicked from tame problems would be very different debates. Better, I think. Journalists who covered wicked problems differently than they covered normal problems would be smarter journalists. Institutions that knew how to distinguish wicked problems from the other kind would eventually learn the limits of command and control.
Wicked problems demand people who are creative, pragmatic, flexible and collaborative. They never invest too much in their ideas because they know they are going to have to alter them. They know there's no right place to start, so they simply start somewhere and see what happens. They accept the fact that they're more likely to understand the problem after it's "solved" than before. They don't expect to get a good solution; they keep working until they've found something that's good enough. They're never convinced that they know enough to solve the problem, so they are constantly testing their ideas on different stakeholders.
Know any people like that? Maybe we can get them interested in health care...
Technology Came Before Humanity And, Evolutionarily, Paved The Way For It
Archaeologist, University of Bradford; Author, The Buried Soul...
The very idea of a "cognitive toolkit" is one of the most important items in our cognitive toolkit. It is far more than just a metaphor, for the relationship between actual physical tools and the way we think is profound and of immense antiquity.
Ideas such as evolution and a deep prehistory for humanity are as factually well-established as the idea of a round earth, or gravity as a force pulling apples from trees. Only bigots and the misled can doubt them. But the idea that the first chipped stone tool pre-dates, by at least half a million years, the expansion of mind that is so characteristic of humans, should also be knowable by all.
The idea that technology came before humanity and, evolutionarily, paved the way for it, is the scientific concept that I believe should be part of everybody's cognitive toolkit. We could then see that thinking through things and with things, and manipulating virtual things in our minds, is an essential part of critical self-consciousness. The ability to internalize our own creations, by abstracting them, and converting "out-there" tools into mental mechanisms, is what allows the entire scientific project.
Control Your Spotlight
Contributing Editor at Wired and the author of How We Decide and Proust Was a......
In the late 1960s, the psychologist Walter Mischel began a simple experiment with four-year old children. He invited the kids into a tiny room, containing a desk and a chair, and asked them to pick a treat from a tray of marshmallows, cookies, and pretzel sticks. Mischel then made the four-year olds an offer: they could either eat one treat right away or, if they were willing to wait while he stepped out for a few minutes, they could have two treats when he returned. Not surprisingly, nearly every kid chose to wait.
At the time, psychologists assumed that the ability to delay gratification — to get that second marshmallow or cookie — depended on willpower. Some people simply had more willpower than others, which allowed them to resist tempting sweets and save money for retirement.
However, after watching hundreds of kids participate in the marshmallow experiment, Mischel concluded that this standard model was wrong. He came to realize that willpower was inherently weak, and that children who tried to outlast the treat — gritting their teeth in the face of temptation — soon lost the battle, often within thirty seconds.
Instead, Mischel discovered something interesting when he studied the tiny percentage of kids who could successfully wait for the second treat. Without exception, these "high delayers" all relied on the same mental strategy: they found a way to keep themselves from thinking about the treat, directing their gaze away from the yummy marshmallow. Some covered their eyes or played hide-and-seek underneath the desk. Others sang songs from "Sesame Street," or repeatedly tied their shoelaces, or pretended to take a nap. Their desire wasn't defeated — it was merely forgotten.
Mischel refers to this skill as the "strategic allocation of attention," and he argues that it's the skill underlying self-control. Too often, we assume that willpower is about having strong moral fiber. But that's wrong — willpower is really about properly directing the spotlight of attention, learning how to control that short list of thoughts in working memory. It's about realizing that if we're thinking about the marshmallow we're going to eat it, which is why we need to look away.
What's interesting is that this cognitive skill isn't just useful for dieters. It seems to be a core part of success in the real world. For instance, when Mischel followed up with the initial subjects 13 years later — they were now high school seniors — he found that performance on the marshmallow task was highly predictive of a vast range of metrics. Those kids who struggled to wait at the age of four were also more likely to have behavioral problems, both in school and at home. They struggled in stressful situations, often had trouble paying attention, and found it difficult to maintain friendships. Most impressive, perhaps, were the academic numbers: the kids who could wait fifteen minutes for their marshmallow had S.A.T. scores that were, on average, two hundred and ten points higher than those of kids who could wait only thirty seconds.
These correlations demonstrate the importance of learning to strategically allocate our attention. When we properly control the spotlight, we can resist negative thoughts and dangerous temptations. We can walk away from fights and improve our odds against addiction. Our decisions are driven by the facts and feelings bouncing around the brain — the allocation of attention allows us to direct this haphazard process, as we consciously select the thoughts we want to think about.
Furthermore, this mental skill is only getting more valuable. We live, after all, in the age of information, which makes the ability to focus on the important information incredibly important. (Herbert Simon said it best: "A wealth of information creates a poverty of attention.") The brain is a bounded machine and the world is a confusing place, full of data and distractions — intelligence is the ability to parse the data so that it makes just a little bit more sense. Like willpower, this ability requires the strategic allocation of attention.
One final thought: In recent decades, psychology and neuroscience have severely eroded classical notions of free will. The unconscious mind, it turns out, is most of the mind. And yet, we can still control the spotlight of attention, focusing on those ideas that will help us succeed. In the end, this may be the only thing we can control. We don't have to look at the marshmallow.
Is a neurologist and neuroscientist originally from Italy...
Entanglement is "spooky action at a distance," as Einstein liked to say (he actually did not like it at all, but at some point he had to admit that it exists). In quantum physics, two particles are entangled when a change in one particle is immediately associated with a change in the other particle. Here comes the spooky part: we can separate our "entangled buddies" as far as we like, and they will still remain entangled. A change in one of them is instantly reflected in the other, even though they are physically far apart (and I mean in different countries!)
Entanglement feels like magic. It is really difficult to wrap our heads around it. And yet, entanglement is a real phenomenon, measurable and reproducible in the lab. And there is more. While for many years entanglement was thought to be a very delicate phenomenon, only observable in the infinitesimally small world of quantum physics ("oh good, our world is immune from that weird stuff") and quite volatile, recent evidence suggests that entanglement may be much more robust and even much more widespread than we initially thought. Photosynthesis may happen through entanglement, and recent brain data suggest that entanglement may play a role in coherent electrical activity of distant groups of neurons in the brain.
Entanglement is a good cognitive chunk because it challenges our cognitive intuitions. Our minds seem built to prefer relatively mechanic cause-and-effect stories as explanations of natural phenomena. And when we can't come up with one of those stories, then we tend to resort to irrational thinking, the kind of magic we feel when we think about entanglement. Entangled particles teach us that our beliefs of how the world works can seriously interfere with our understanding of it. But they also teach us that if we stick with the principles of good scientific practice, of observing, measuring, and then reproducing phenomena that we can frame in a theory (or that are predicted by a scientific theory), we can make sense of things. Even very weird things like entanglement.
Entanglement is also a good cognitive chunk because with its existence it implicitly whispers to us that seemingly self-evident cause-and-effect phenomena may not be cause-and-effect at all. The timetable of modern vaccination, probably the biggest accomplishment of modern medicine, coincides with the onset of symptoms of autism in children. This temporal correspondence may mislead us into thinking that the vaccination produced the symptoms, and hence the condition of autism. At the same time, that temporal correspondence should make us suspicious of straightforward cause-and-effect associations, inviting us to take a second look and to run controlled experiments to find out whether there is really a link between vaccines and autism. We now know there is no such link. Unfortunately, though, this belief is very hard to eradicate and is leading some parents to the potentially disastrous decision of not vaccinating their children.
The story of entanglement is a great example of the capacity of the human mind to reach out almost beyond itself. The key word here is "almost." Because we "got there," it is self-evident that we could "get there." But it didn't feel like it, did it? Until we managed to observe, measure, and reproduce that phenomenon predicted by quantum theory, it just felt a little "spooky." (It still feels a bit spooky, doesn't it?) Humans are naturally inclined to reject facts that do not fit their beliefs, and indeed, when confronted with those facts, they tend to automatically reinforce their beliefs and brush the facts under the carpet. The beautiful story of entanglement reminds us that we can go "beyond ourselves," that we don't have to cling desperately to our beliefs, and that we can make sense of things. Even spooky ones.
While We Are Social Creatures, It's Often Best Not To Admit It
Founder of UserLand Software...
New York City, my new home, teaches you that, while we are social creatures, it's often best not to admit it.
As you weave among the obstacles on the sidewalks of Manhattan, it's easy to get distracted from your thoughts and pay attention to the people you're encountering. It's okay to do that if you're at a stop, but if you're in motion, if your eyes engage with another, that signals that you would like to negotiate.
Not good. A sign of weakness. Whether the oncoming traffic is aware or not, he or she will take advantage of this weakness and charge right into your path, all the while not making eye contact. There is no appeal. All you can do is shift out of their path, but even this won't avoid a collision because your adversary will unconsciously shift closer to you. Your weakness is attractive. Your space is up for grabs. At this point you have no choice but to collide, and in the etiquette of New York street walking you're responsible.
That's why the people who check their smartphones for text messages or emails while walking so totally command the sidewalks. They are heat-seeking missiles, and it's your heat they seek.
I don't think this is just New York, it's a feature of the human species. We seek companionship.
For a while in 2005 I lived on the beach in northeast Florida outside St Augustine. The beach is so long and relatively empty, they let you drive on the beach to find the perfect spot to bathe, and if you're willing to drive a bit you can be alone. So I would drive to a secluded spot, park my car and go out into the surf. When I came back, more often than not, there was a car parked right next to mine. They could have parked anywhere in a mile in either direction and had it all to themselves.
Add that to your cognitive toolkit!
Professor, Harvard University, Director, Personal Genome Project....
The names Lysenko and Lamarck are nearly synonymous with bad science — worse than merely mediocre science because of the huge political and economic consequences.
From 1927 to 1964, Lysenko managed to keep the "theory of the inheritance of acquired characteristics" dogmatically directing Soviet agriculture and science. Andrei Sakharov and other Soviet physicists finally provoked the fall of this cabal in the 1960s, blaming it for the "shameful backwardness of Soviet biology and of genetics in particular … defamation, firing, arrest, even death, of many genuine scientists".
At the opposite (yet equally discredited) end of the genetic theory spectrum was the Galtonian eugenic movement, which from 1883 onward grew in popularity in many countries until the 1948 Universal Declaration of Human Rights, ("the most translated document in the world") stated that "Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family." Nevertheless, forced sterilizations persisted into the 1970s. The "shorthand abstraction" is that Lysenkoism overestimated the impact of environment and eugenics overestimated the role of genetics.
One form of scientific blindness occurs, as above, when a theory displays exceptional political or religious appeal. But another source of blindness arises when we rebound from catastrophic failures of pseudoscience (or science).
We might conclude from the two aforementioned genetic disasters that we only need to police abuses of our human germ-line inheritance. Combining the above with the ever-simmering debate on Darwin, we might develop a bias that human evolution has stopped or that "design" has no role.
But we are well into an unprecedented new phase of evolution in which we must generalize beyond our DNA-centric world-view. We now inherit acquired characteristics. We always have, but now this feature is dominant and exponential. We apply eugenics at the individual family level (where it is a right) not the governmental level (where it is a wrong). Moreover, we might aim for the same misguided targets that eugenics chose (i.e. uniformity around "ideal" traits), via training and medications.
Evolution has accelerated from geologic speed to internet speed — still employing random mutation and selection, but also using non-random intelligent design — which makes it even faster. We are losing species — not just by extinction, but by merger. There are no longer species barriers between humans, bacteria and plants — or even between humans and machines.
Short-hand abstractions are only one device that we employ to construct the "Flynn Effect". How many of us noticed the minor milestone when the SAT tests first permitted calculators? How many of us have participated in conversations semi-discreetly augmented by Google or text messaging? Even without invoking artificial intelligence, how far are we from commonplace augmentation of our decision-making the way we have augmented our math, memory, and muscles?
Einstein's Blade in Ockham's Razor
Doctorate in philosophy, a masters in image processing, a patent for interface......
In 1971, when I was a teenager, my father died in a big airplane crash. Somehow I began to turn 'serious', trying to understand the world around me and my place in it, looking for meaning and sense, beginning to realize: everything was very different than I had previously assumed in the innocence of childhood.
It was the beginning of my own "building a cognitive toolset" and I remember the pure joy of discovery, reading voraciously and — quite out of sync with friends and school — I devoured encyclopedias, philosophy, biographies and... science fiction.
One such story stayed with me and one paragraph within it especially:
"We need to make use of Thargola's Sword! The principle of Parsimony.
First put forth by the medieval philosopher Thargola14, who said,
'We must drive a sword through any hypothesis that is not strictly necessary.'"
That really made me think — and rethink again...
Finding out who this man might have been took quite a while, but it was also another beginning: a love affair with libraries, large tomes, dusty bindings... surfing knowledge, as it were.
And I did discover: there had been a monk named Guillelmi, from a hamlet surrounded by oaks, apocryphally called 'William of Ockham'. He crossed my path again years later when I was lecturing in Munich near Occam Street and realized that he had spent the last 20 years of his life there, under King Ludwig IV, in the mid-1300s.
Isaac Asimov had pilfered, or let's say homaged, good old Guillelmi for what is now known in many variants as "Ockham's razor", such as
"Plurality should not be posited without necessity."
"Entities are not to be multiplied beyond necessity"
or, more generally and colloquially, and a bit less transliterated from Latin:
A simpler explanation invoking fewer hypothetical constructs is preferable.
And there it was, the dancing interplay between Simplex and Complex, which has fascinated me in so many forms ever since. For me, it is very near the center of "understanding the world", as our question posited.
Could it really be true that the innocent-sounding 'keep it simple' is such an optimal strategy for dealing with questions large and small, scientific as well as personal? Surely, trying to eliminate superfluous assumptions can be a useful tenet, and can be found from Sagan to Hawking as part of their approach to thinking in science. But something never quite felt right to me — intuitively it was clear that sometimes things are just not simple — and that merely "the simplest" of all explanations cannot be taken as truth or proof.
Any detective story would pride itself on not using the most obvious explanation of who did it or how it happened.
Designing a car to 'have the optimal feel going into a curve at high speed' will require hugely complex systems to finally arrive at "simply good".
Water running downhill will take a meandering path instead of the straight line.
Both are examples of a domain shift: the non-simple solution is still "the easiest" seen from another viewpoint. For the water, using the least energy going down the shallowest slope is more important than taking the straightest line from A to B.
And that is one of the issues with Ockham:
The definition of what "simple" is — can already be anything but simple.
And what "simpler" is — well, it just doesn't get any simpler there.
There is that big difference between simple and simplistic.
And seen more abstractly, the principle of simple things leading to complexity has run in parallel with my life and involved me deeply throughout.
In the early seventies I also began tinkering with the first large scale modular synthesizers, finding quickly how hard it is to recreate seemingly 'simple sounds'.
There was unexpected complexity in a single note struck on a piano that eluded even dozens of oscillators and filters, by magnitudes.
Lately, one of many projects has been to revisit the aesthetic space of scientific visualizations, and another is the epitome of mathematics made tangible: fractals, which I had worked on almost 20 years ago with virtuoso coder Ben Weiss and now enjoy via realtime flythroughs on a little handheld smartphone.
Here was the most extreme example: a tiny formula, barely one line on paper, which, iterated recursively, yields worlds of complex images of amazing beauty.
(Ben had the distinct pleasure of showing Benoit Mandelbrot an alpha version at the last TED just months before his death)
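For readers curious what such a one-line formula looks like: the mention of Mandelbrot suggests the iteration z → z² + c, where a point c belongs to the set if the orbit of z never escapes. A minimal sketch (my illustration; the essay does not give the formula itself):

```python
# Membership test for the Mandelbrot set: iterate z -> z*z + c from z = 0
# and watch whether the orbit escapes. One line of math, worlds of images.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit provably escapes
            return False
    return True               # still bounded after max_iter steps

assert in_mandelbrot(0)       # the origin never escapes: 0, 0, 0, ...
assert not in_mandelbrot(1)   # 0, 1, 2, 5, ... escapes quickly
```

Rendering an image is just this test applied to a grid of complex values of c, which is why the whole thing fits comfortably on a smartphone.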
My hesitation towards overuse of parsimony was expressed perfectly in the quote by Albert Einstein, arguably the counterpart "blade" to Ockham's razor:
"Things should be made as simple as possible — but not simpler"
And there we have the perfect application of its truth, used recursively on itself: Neither Einstein nor Ockham actually used the exact words as quoted!
After I sifted through dozens of books, his collected works and letters in German, and the Einstein archives: nowhere there, nor in Britannica, Wikipedia or Wikiquote, was anyone able to substantiate exact sources, and the same applies to Ockham. If anything can be found, it is earlier precedents... ;)
Surely one can amass retweeted, reblogged and regurgitated instances for both very quickly — they have become memes, of course. One could also take the standpoint that in each case they certainly 'might' have said it 'just like that', since each used several expressions quite similar in form and spirit.
But to attribute the exact words just because they are kind of close would be, well... another case of: it is not that simple!
And there is a huge difference between additional and redundant information.
(Or else one could lose the second, redundant "ein" in "Einstein"?)
Linguistic jesting aside: Nonetheless, the Razor and the Blade constitute a very useful combination of approaching analytical thinking.
Shaving away non-essential conjectures is a good thing, a worthy inclusion in "everybody's toolkit" — and so is the corollary: not to overdo it!
And my own bottom line: There is nothing more complex than simplicity.
Kakonomics, or the strange preference for Low-quality outcomes
Institut Nicod, Paris; www.interdisciplines.org...
I think that an important concept for understanding why life so often sucks is Kakonomics, or the weird preference for Low-quality payoffs.
Standard game-theoretical approaches posit that, whatever people are trading (ideas, services, or goods), each wants to receive High-quality work from others. Let's stylize the situation so that goods can be exchanged at only two quality levels: High and Low. Kakonomics describes cases where people not only have the standard preference to receive a High-quality good and deliver a Low-quality one (the standard sucker's payoff) but actually prefer to deliver a Low-quality good and receive a Low-quality one; that is, they connive on a Low-Low exchange.
How can this ever be possible? And how can it be rational? Even when we are lazy and prefer to deliver a Low-quality outcome (like preferring to write a piece for a mediocre journal, provided they do not ask for too much work), we still would have preferred to work less and receive more, that is, to deliver Low-quality and receive High-quality. Kakonomics is different: here, we not only prefer to deliver a Low-quality good, but also prefer to receive a Low-quality good in exchange!
Kakonomics is the strange — yet widespread — preference for mediocre exchanges, as long as nobody complains about it. Kakonomic worlds are worlds in which people not only live with each other's laxness, but expect it: I trust you not to keep your promises in full because I want to be free not to keep mine and not to feel bad about it. What makes it an interesting and weird case is that, in all kakonomic exchanges, the two parties seem to have a double deal: an official pact in which both declare their intention to exchange at a High-quality level, and a tacit accord whereby discounts are not only allowed but expected. It becomes a form of tacit mutual connivance. Thus, nobody is free-riding: Kakonomics is regulated by a tacit social norm of discount on quality, a mutual acceptance of a mediocre outcome that satisfies both parties, as long as they go on saying publicly that the exchange is in fact at a High-quality level.
Take an example: A well-established best-selling author has to deliver his long-overdue manuscript to his publisher. He has a large audience and knows very well that people will buy his book just because of his name, and that anyway the average reader doesn't read past the first chapter. His publisher knows it as well. Thus, the author delivers a new manuscript with a stunning incipit and a mediocre plot (the Low-quality outcome): she is happy with it, congratulates him as if she had received a masterpiece (the High-quality rhetoric), and they are both satisfied. The author's preference is not only to deliver Low-quality work, but also that the publisher give back the same, for example by sparing him any serious editing and publishing the book anyway. They trust each other's untrustworthiness, and connive on a mutually advantageous Low outcome. Whenever there is a tacit deal to converge on Low quality with mutual advantages, we are dealing with a case of Kakonomics.
Paradoxically, if one of the two parties delivers a High-quality outcome instead of the expected Low-quality one, the other party resents it as a breach of trust, even if he may not acknowledge it openly. In the example, the author may resent the publisher if she decides to deliver High-quality editing. Being trustworthy in this relationship means delivering Low quality too. Contrary to the standard Prisoner's Dilemma, the willingness to repeat an interaction with someone is ensured if he or she also delivers Low quality, rather than High quality.
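The inverted preference ordering can be made concrete with a toy symmetric game (the payoff numbers are my own illustration, not from the essay): in Kakonomics, the Low-Low cell is each party's single favorite outcome, whereas in the Prisoner's Dilemma mutual defection is an equilibrium that nobody actually wants.

```python
# Toy payoff tables. u[(mine, theirs)] is my utility when I deliver
# quality `mine` and receive quality `theirs`. (Illustrative numbers.)

# Kakonomics: delivering Low saves effort, and receiving Low frees me
# from any obligation to raise my own game -- so Low-Low tops the list.
kakonomics = {
    ("Low", "Low"): 3,    # connivance: relaxed, no obligation
    ("Low", "High"): 2,   # now I'm expected to reciprocate in kind
    ("High", "High"): 1,  # costly effort on both sides
    ("High", "Low"): 0,   # sucker: my effort is wasted
}

# Standard Prisoner's Dilemma ordering, for contrast:
# temptation > reward > punishment > sucker.
prisoners_dilemma = {
    ("Low", "High"): 3,   # temptation: shirk while the other works
    ("High", "High"): 2,  # mutual cooperation
    ("Low", "Low"): 1,    # mutual defection
    ("High", "Low"): 0,   # sucker's payoff
}

def is_nash(u, profile):
    """True if neither player gains by unilaterally switching quality
    (symmetric game: the same table gives the other player's view)."""
    mine, theirs = profile
    flip = {"Low": "High", "High": "Low"}
    return (u[(mine, theirs)] >= u[(flip[mine], theirs)]
            and u[(theirs, mine)] >= u[(flip[theirs], mine)])

# Low-Low is an equilibrium in both games...
print(is_nash(kakonomics, ("Low", "Low")))        # True
print(is_nash(prisoners_dilemma, ("Low", "Low")))  # True
# ...but only under Kakonomics is it also each party's bliss point.
print(max(kakonomics, key=kakonomics.get))         # ('Low', 'Low')
print(max(prisoners_dilemma, key=prisoners_dilemma.get))  # ('Low', 'High')
```

This is why the essay's equilibrium is so stable: in the Prisoner's Dilemma, mutual defection is a trap both players would escape if they could; in Kakonomics, neither party even wishes to.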
Kakonomics is not always bad. Sometimes it allows a certain tacitly negotiated discount that makes life more relaxing for everybody. As one friend who was renovating a country house in Tuscany told me once: "Italian builders never deliver when they promise, but the good thing is they do not expect you to pay them when you promise either."
But the major problem with Kakonomics — which in ancient Greek means the economics of the worst — and the reason it is a form of collective insanity so difficult to eradicate, is that each Low-quality exchange is a local equilibrium in which both parties are satisfied, while each of these exchanges erodes the overall system in the long run. So the threat to good collective outcomes doesn't come only from free riders and predators, as mainstream social science teaches us, but also from well-organized norms of Kakonomics that regulate exchanges for the worse. The cement of society is not just cooperation for the good: in order to understand why life sucks, we should also look at norms of cooperation toward a local optimum and a common worse.
You can show something is definitely dangerous, but not definitely safe
Business Affairs Editor, The Economist; Author, An Edible History of Humanity...
A wider understanding of the fact that you can't prove a negative would, in my view, do a great deal to upgrade the public debate around science and technology.
As a journalist I have lost count of the number of times that people have demanded that a particular technology be "proven to do no harm". This is, of course, impossible, in just the same way that proving that there are no black swans is impossible. You can look for a black swan (harm) in various ways, but if you fail to find one that does not mean that none exists: absence of evidence is not evidence of absence.
All you can do is look again for harm, in a different way. If you still fail to find it after looking in all the ways you can possibly think of, the question is still open: "lack of evidence of harm" means both "safe as far as we can tell" and "we still can't be sure if it's safe or not".
Scientists are often accused of logic-chopping when they point this out. But it would be immensely helpful to public discourse if there were a wider understanding that you can show something is definitely dangerous, but you cannot show it is definitely safe.
The Black Swan Technology
Co-founder of Daisy Systems and founding Chief Executive Officer of Sun Microsystems...
Think back to the world 10 years ago. Google had just gotten started; Facebook and Twitter didn't exist. There were no smart phones, no one remotely conceived of the possibility of the 100,000 iPhone apps that exist today. The few large impact technologies (versus slightly incremental advances in technologies) that occurred in the past 10 years were black swan technologies. In his book, Nassim Taleb defines a Black Swan as an event of low probability, extreme impact, and with only retrospective predictability. Black swans can be positive or negative in their impact and are found in every sector. Still, the most pressing reason I believe "black swan technology" is a conceptual tool that should be added to everyone's cognitive toolkit today is simply because the challenges of climate change and energy production we face today are too big to be tackled by known solutions and safe bets.
I recall that fifteen years ago, when we were starting Juniper Networks, there was absolutely no interest in replacing traditional telecommunications infrastructure (ATM was the mantra) with Internet protocols. After all, there were hundreds of billions of dollars invested in the legacy infrastructure, and it looked as immovable as today's energy infrastructure. Conventional wisdom would say to make incremental improvements to maximize the potential of the existing infrastructure. The fundamental flaw in the conventional wisdom is the failure to acknowledge the possibility of a black swan. Improbable is not unimportant. I believe the likely future is not a traditional econometric forecast but rather one of today's improbable becoming tomorrow's conventional wisdom! Who would have been crazy enough to forecast in 2000 that by 2010 almost twice as many people in India would have access to cell phones as to latrines? Wireless phones were once only for the very rich. With a black swan technology shot you need not be constrained by the limits of the current infrastructure, projections or market. You simply change the assumptions.
Many argue that since we already have some alternative energy technology today, we should deploy it quickly. They fail to see the potential of Black Swan technology possibilities; they discount them because they mistake improbable for unimportant and cannot imagine the art of the possible that technology enables. I believe doing this alone runs the risk of spending vast amounts of money on outdated conventional wisdom. Even more importantly, it won't solve the problems we face. Any time focused on short-term, incremental solutions will only distract from working on the home runs that could change the assumptions around energy and society's resources. While there is no shortage of existing technology providing incremental improvements today — whether today's thin-film solar cells, wind turbines, or lithium-ion batteries — even summed, they are simply irrelevant to the scale of our problems. They may make interesting and sometimes large businesses, but they will not impact the prevailing energy and resource issues at scale. For that we must look for and invest in quantum jumps in technology with low probability of success; we must create Black Swan technologies. We must enable the multiplication of resources that only technology can achieve.
So what are these next generation technologies, these black swan technologies of energy? These are risky investments that stand a high chance of failure, but enable larger technological leaps that promise earthshaking impact if successful: making solar power cheaper than coal or viable without subsidies, economically making lighting and air conditioning 80 percent more efficient. Consider 100 percent more efficient vehicle engines, ultra-cheap energy storage, and countless other technological leaps that we can't yet imagine. It's unlikely that any single shot works, of course. But even 10 Google-like disruptions out of 10,000 shots will completely upend conventional wisdom, econometric forecasts, and, most importantly, our energy future.
To do so we must reinvent the infrastructure of society by harnessing and motivating bright minds with a whole new set of future assumptions, asking "what could possibly be?" rather than "what is." We need to create a dynamic environment of creative contention and collective brilliance that will yield innovative ideas across disciplines and allow innovation to triumph. We must foster a social ecosystem that encourages taking risks on innovation. Popularization of the concept of the "Black Swan Technology" is essential to instill the right mindset in entrepreneurs, policymakers, investors and the public: that anything (maybe even everything) is possible. If we harness and motivate these bright new minds with the right market signals and encouragement, a whole new set of future assumptions, unimaginable today, will be tomorrow's conventional wisdom.
French social and cognitive scientist...
In 1976, Richard Dawkins introduced the idea of a meme: a unit of cultural transmission capable of replicating itself and of undergoing Darwinian selection. "Meme" has become a remarkably successful addition to everybody's cognitive toolkit. I want to suggest that the concept of a meme should be, if not replaced, at least supplemented with that of a cultural attractor.
The very success of the word "meme" is, or so it seems, an illustration of the idea of a meme: the word has now been used billions of times. But is the idea of a meme being replicated whenever the word is used? Well, no. Not only do "memeticists" have many quite different definitions of a meme, but also, and more importantly, most users of the term have no clear idea of what a meme might be. Each time, the term is used with a vague meaning relevant in the circumstances. All these meanings overlap, but they are not replications of one another. The idea of a meme, as opposed to the word "meme", may not be such a good example of a meme after all!
The case of the meme idea illustrates a general puzzle. Cultures do contain items — ideas, norms, tales, recipes, dances, rituals, tools, practices, and so on — that are produced again and again. These items remain self-similar over social space and time: in spite of variations, an Irish stew is an Irish stew, Little Red Riding Hood is Little Red Riding Hood and a samba is a samba. The obvious way to explain this stability at the macro level of the culture is, or so it seems, to assume fidelity at the micro level of interindividual transmission. Little Red Riding Hood must have been replicated faithfully enough most of the time for the tale to have remained self-similar over centuries of oral transmission or else the story would have drifted in all kinds of ways and the tale itself would have vanished like water in the sand. Macro stability implies micro fidelity. Right? Well, no. When we study micro processes of transmission — leaving aside those that use techniques of strict replication such as printing or internet forwarding — what we observe is a mix of preservation of the model and of construction of a version that suits the capacities and interests of the transmitter. From one version to the next, the changes may be small, but when they occur at the population scale, their cumulative effect should compromise the stability of cultural items. But — and here lies the puzzle — they don't. What, if not fidelity, explains stability?
Well, bits of culture — memes if you want to dilute the notion and call them that — remain self-similar not because they are replicated again and again but because the variations that occur at almost every turn in their repeated transmission, rather than resulting in "random walks" drifting away in all directions from an initial model, tend to gravitate around cultural attractors. Ending Little Red Riding Hood when the wolf eats the child would make for a simpler story to remember, but a Happy Ending is too powerful a cultural attractor. If a person had only heard the story ending with the wolf's meal, my guess is that either she would not have retold it at all — and that is selection — or she would have modified it by reconstructing a happy ending — and this is attraction. Little Red Riding Hood has remained culturally stable not because it has been faithfully replicated all along, but because the variations present in all its versions have tended to cancel one another out.
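The contrast between drift and attraction can be made vivid with a toy simulation (my own illustration of the argument, not Sperber's formal model): represent a story as a single number, add copying noise at every retelling, and optionally pull each version a fraction of the way toward a culturally favored form.

```python
import random

random.seed(42)  # for reproducibility of this sketch

def transmit(generations, pull, attractor=0.0, noise=1.0):
    """Simulate one chain of transmission; return the final version's
    distance from the original (which starts at 0.0)."""
    x = 0.0
    for _ in range(generations):
        x += random.gauss(0, noise)      # copying error at each retelling
        x += pull * (attractor - x)      # attraction (pull=0 means pure drift)
    return abs(x)

# 500 independent chains of 200 retellings each.
drift = [transmit(200, pull=0.0) for _ in range(500)]
stable = [transmit(200, pull=0.3) for _ in range(500)]

# Pure drift wanders far from the original (on the order of sqrt(200)
# noise units); with attraction, versions stay clustered near the
# attractor despite identical per-step noise.
print(sum(drift) / len(drift) > sum(stable) / len(stable))  # True
```

The per-retelling infidelity is the same in both runs; only the shared bias differs. That is the essay's point: macro stability needs attraction, not micro fidelity.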
Why should there be cultural attractors at all? Because there are in our minds, our bodies, and our environment biasing factors that affect the way we interpret and re-produce ideas and behaviors. (I write "re-produce" with a hyphen because, more often than not, we produce a new token of the same type without reproducing in the usual sense of copying some previous tokens.) When these biasing factors are shared in a population, cultural attractors emerge.
Here are a few rudimentary examples.
Rounded numbers are cultural attractors: they are easier to remember and provide better symbols for magnitudes. So we celebrate twentieth wedding anniversaries, hundredth issues of journals, the millionth copy sold of a record, and so on. This, in turn, creates a special cultural attractor for prices, just below rounded numbers — $9.99 or $9,990 are likely price tags — so as to avoid evoking the higher magnitude.
In the diffusion of techniques and artifacts, efficiency is a powerful cultural attractor. Paleolithic hunters learning from their elders how to manufacture and use bows and arrows were aiming not so much at copying the elders as at becoming as good as possible at shooting arrows themselves. Much more than faithful replication, this attraction of efficiency — when there aren't that many ways of being efficient — explains the cultural stability (and also the historical transformations) of various technical traditions.
In principle there should be no limit to the diversity of supernatural beings humans can imagine. However, as Pascal Boyer has argued, only a limited repertoire of such beings is exploited in human religions. Its members — ghosts, gods, ancestor spirits, dragons, and so on — all have two features in common. On the one hand, they each violate some major intuitive expectations about living beings: the expectation of mortality, of belonging to one and only one species, of being limited in one's access to information, and so on. On the other hand, they satisfy all other intuitive expectations and are therefore, in spite of their supernaturalness, rather predictable. Why should this be so? Because being "minimally counterintuitive" (Boyer's phrase) makes for "relevant mysteries" (my phrase) and is a cultural attractor. Imaginary beings that are either less or more counterintuitive than that are forgotten or are transformed in the direction of this attractor.
And what is the attractor around which the "meme" meme gravitates? The meme idea — or rather a constellation of trivialized versions of it — has become an extraordinarily successful bit of contemporary culture not because it has been faithfully replicated again and again, but because our conversation often does revolve — and here is the cultural attractor — around remarkably successful bits of culture that, in the time of mass media and the Internet, pop up more and more frequently and are indeed quite relevant to our understanding of the world we live in. They attract our attention even when — or, possibly, especially when — we don't understand all that well what they are and how they come about. The meaning of "meme" has drifted from Dawkins's precise scientific idea to a means of referring to these striking and puzzling objects.
This was my answer. Let me end by sharing a question (which time will answer): is the idea of a cultural attractor itself close enough to a cultural attractor for a version of it to become in turn a "meme"?
Personality traits are continuous with mental illnesses
Evolutionary Psychologist, University of New Mexico; Author, Spent: Sex,......
We like to draw clear lines between normal and abnormal behavior. It's reassuring, for those who think they're normal. But it's not accurate. Psychology, psychiatry, and behavior genetics are converging to show that there's no clear line between "normal variation" in human personality traits and "abnormal" mental illnesses. Our instinctive way of thinking about insanity — our intuitive psychiatry — is dead wrong.
To understand insanity, we have to understand personality. There's a scientific consensus that personality traits can be well-described by five main dimensions of variation. These "Big Five" personality traits are called openness, conscientiousness, extraversion, agreeableness, and emotional stability. The Big Five are all normally distributed in a bell curve, statistically independent of each other, genetically heritable, stable across the life-course, unconsciously judged when choosing mates or friends, and found in other species such as chimpanzees. They predict a wide range of behavior in school, work, marriage, parenting, crime, economics, and politics.
Mental disorders are often associated with maladaptive extremes of the Big Five traits. Over-conscientiousness predicts obsessive-compulsive disorder, whereas low conscientiousness predicts drug addiction and other "impulse control disorders". Low emotional stability predicts depression, anxiety, bipolar, borderline, and histrionic disorders. Low extraversion predicts avoidant and schizoid personality disorders. Low agreeableness predicts psychopathy and paranoid personality disorder. High openness is on a continuum with schizotypy and schizophrenia. Twin studies show that these links between personality traits and mental illnesses exist not just at the behavioral level, but at the genetic level. And parents who are somewhat extreme on a personality trait are much more likely to have a child with the associated mental illness.
One implication is that the "insane" are often just a bit more extreme in their personalities than whatever promotes success or contentment in modern societies — or more extreme than we're comfortable with. A less palatable implication is that we're all insane to some degree. All living humans have many mental disorders, mostly minor but some major, and these include not just classic psychiatric disorders like depression and schizophrenia, but diverse forms of stupidity, irrationality, immorality, impulsiveness, and alienation. As the new field of positive psychology acknowledges, we are all very far from optimal mental health, and we are all more or less crazy in many ways. Yet traditional psychiatry, like human intuition, resists calling anything a disorder if its prevalence is higher than about 10%.
The personality/insanity continuum is important in mental health policy and care. There are angry and unresolved debates over how to revise the 5th edition of psychiatry's core reference work, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), to be published in 2013. One problem is that American psychiatrists dominate the DSM-5 debates, and the American health insurance system demands discrete diagnoses of mental illnesses before patients are covered for psychiatric medications and therapies. Also, the U.S. Food and Drug Administration approves psychiatric medications only for discrete mental illnesses. These insurance and drug-approval issues push for definitions of mental illnesses that are artificially extreme, mutually exclusive, and based on simplistic checklists of symptoms. Insurers also want to save money, so they push for common personality variants — shyness, laziness, irritability, conservatism — not to be classed as illnesses worthy of care. But the science doesn't fit the insurance system's imperatives. It remains to be seen whether DSM-5 is written for the convenience of American insurers and FDA officials, or for international scientific accuracy.
Psychologists have shown that in many domains, our instinctive intuitions are fallible (though often adaptive). Our intuitive physics — ordinary concepts of time, space, gravity, and impetus — can't be reconciled with relativity, quantum mechanics, or cosmology. Our intuitive biology — ideas of species essences and teleological functions — can't be reconciled with evolution, population genetics, or adaptationism. Our intuitive morality — self-deceptive, nepotistic, clannish, anthropocentric, and punitive — can't be reconciled with any consistent set of moral values, whether Aristotelean, Kantian, or utilitarian. Apparently, our intuitive psychiatry has similar limits. The sooner we learn those limits, the better we'll be able to help people with serious mental illnesses, and the more humble we'll be about our own mental health.
Expert, Financial Derivatives and Risk; Author, Traders, Guns & Money: Knowns and......
Confluence of factors is highly influential in setting off changes in complex systems. A common example is in risk — the "Swiss cheese theory". Losses only occur if all controls fail — the holes in the Swiss cheese align.
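The Swiss cheese model reduces to a short calculation (a minimal sketch with illustrative numbers of my own, assuming the control layers fail independently): a loss requires every layer's hole to line up at once, so the loss probability is the product of the individual failure probabilities.

```python
from math import prod

def loss_probability(failure_probs):
    """P(loss) under the Swiss cheese model: a loss occurs only when
    every control layer fails at once, so for independent layers the
    probabilities multiply."""
    return prod(failure_probs)

# Four controls, each failing 10% of the time: ~1 loss in 10,000.
layers = [0.1, 0.1, 0.1, 0.1]
print(round(loss_probability(layers), 6))        # 0.0001

# Removing one layer raises the loss probability tenfold -- which is
# why defense-in-depth degrades so sharply when controls are skipped.
print(round(loss_probability(layers[:-1]), 6))   # 0.001
```

The independence assumption is exactly what confluence undermines: when a common cause aligns the holes, the layers fail together and the naive product badly understates the risk.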
Confluence — the coincidence of events in a single setting — is well understood. Parallel developments, often in different settings or disciplines, can also be influential in shaping events. A coincidence of similar logic and processes in seemingly unrelated activities provides indications of likely future developments and risks. The ability to better recognize "parallelism" would improve cognitive processes.
Economic forecasting is dismal, prompting John Kenneth Galbraith to remark that economists were only put on earth to make astrologers look good. Few economists anticipated the current financial problems, at least before they happened.
However, the art market proved remarkably accurate in anticipating developments, especially the market in the work of Damien Hirst — the best known of a group of artists dubbed yBAs (young British Artists).
Hirst's most iconic work — The Physical Impossibility of Death in the Mind of Someone Living — is a 14-foot (4.3 meter) tiger shark immersed in formaldehyde in a vitrine weighing over two tons. Charles Saatchi (the advertising guru) bought it for £50,000. In December 2004, Saatchi sold the work to Steve Cohen, founder and principal of the uber hedge fund SAC Capital Advisors, which manages $20 billion. Cohen paid $12 million for The Physical Impossibility of Death in the Mind of Someone Living, although there are allegations that it was only $8 million.
In June 2007, Damien Hirst tried to sell a life-size platinum cast of a human skull, encrusted with £15 million worth of 8,601 pavé-set industrial diamonds weighing 1,100 carats, including a 52.4-carat pink diamond in the center of the forehead valued at £4 million. For the Love of God was a memento mori — Latin for "remember you must die". The work was offered for sale at £50 million as part of Hirst's Beyond Belief show. In September 2007, For the Love of God was sold at full price to Hirst and some investors, for later resale.
The sale of The Physical Impossibility of Death in the Mind of Someone Living marked the final phase of the irresistible rise of markets. The failure of For the Love of God to sell marked its zenith as clearly as any economic marker.
Parallelism exposes common thought processes and similar valuation approaches to unrelated objects.
Hirst was the artist of choice for conspicuously consuming hedge fund managers, who were getting very rich managing money. Inflated prices suggested the presence of "irrational excess". The nature of sought-after Hirst pieces, and even their titles, provided an insight into the hubristic self-image of financiers. With its jaws gaping, poised to swallow its prey, The Physical Impossibility of Death in the Mind of Someone Living mirrored the killer instincts of hedge funds, feared predators in financial markets. Cohen "… liked the whole fear factor."
The work of Japanese artist Takashi Murakami provides confirmation. Inspired by Hokusai's famous 19th-century woodblock print The Great Wave off Kanagawa, Murakami's 727 paintings show Mr. DOB, a post-nuclear Mickey Mouse character, as a god riding on a cloud or a shark surfing on a wave. The first 727 is owned by New York's Museum of Modern Art, the second by Steve Cohen.
Parallelism is also evident in the causes underlying several crises facing humanity. It is generally acknowledged that high levels of debt were a major factor in the ongoing global financial crisis. What is missed is that the logic of debt is similar to one underlying other problematic issues.
There is a striking similarity between the problems of the financial system, irreversible climate change, and shortages of vital resources like oil, food and water. Economic growth and wealth were based on borrowed money. Debt allowed society to borrow from the future. It accelerated consumption, as debt is used to purchase something today against the uncertain promise of paying back the borrowing in the future. Society polluted the planet, creating changes in the environment which are difficult to reverse. Under-priced, finite natural resources were wantonly utilized, without proper concern for conservation.
In each area, society borrowed from and pushed problems into the future. Current growth and short-term profits were pursued at the expense of risks which were not evident immediately and that would emerge later.
To dismiss this as short-term thinking and greed is disingenuous. A crucial cognitive factor underlying the approach was a similar process of problem solving — borrowing from, or pushing problems further into, the future. This was consistently applied across different problems, without consideration of its relevance, applicability or desirability. Where such parallelism exists, it feeds on itself, potentially leading to total systemic collapse.
Recognition and understanding of parallelism is one way to improve cognitive thinking. It may provide a better mechanism for predicting specific trends. It may also enable people to increase the dialectic richness, drawing on different disciplines. It requires overcoming highly segmented and narrow educational disciplines, rigid institutional structures and restricted approaches to analysis or problem solving.
Senior Consultant (and former Editor-in-Chief and Publishing Director of New......
Our species might well be renamed Homo dilatus, the procrastinating ape. Somewhere in our evolution we acquired the brain circuitry to deal with sudden crises and respond with urgent action. Steady declines and slowly developing threats are quite different. "Why act now when the future is far off?" is the maxim for a species designed to deal with near-term problems rather than long-term uncertainties. It's a handy view of humankind which everyone who uses science to change policy should keep in their mental toolkit, and one that is greatly reinforced by the endless procrastination in tackling climate change. Cancun follows Copenhagen follows Kyoto, but the more we dither and no extraordinary disaster follows, the more dithering seems just fine.
Such behaviour is not unique to climate change. It took the sinking of the Titanic to put sufficient lifeboats on passenger ships, the huge spill from the Amoco Cadiz to set international marine pollution rules, and the Exxon Valdez disaster to drive the switch to double-hulled tankers. The same pattern is seen in the oil industry, with the Gulf spill the latest chapter in the disaster-first, regulations-later mindset of Homo dilatus.
There are a million similar stories from human history. So many great powers and once dominant corporations slipped away as their fortunes declined without the crisis they needed to force change. Slow and steady change simply leads to habituation not action: you could walk in the British countryside now and hear only a fraction of the birdsong that would have delighted a Victorian poet but we simply cannot feel insidious loss. Only a present crisis wakes us.
So puzzling is our behaviour that the "psychology of climate change" has become a significant area of research, with efforts to find those vital messages that will turn our thinking towards the longer term and away from the concrete now. Sadly, the skull of Homo dilatus seems too thick for the tricks that are currently on offer. In the case of climate change, we might better focus on adaptation until a big crisis comes along to rivet our minds. The complete loss of the summer Arctic ice might be the first. A huge dome of shining ice, about half the size of the United States covers the top of the world in summer now. In a couple of decades it will likely be gone. Will millions of square kilometers of white ice turning to dark water feel like a crisis? If that doesn't do it then following soon after will likely be painful and persistent droughts across the United States, much of Africa, Southeast Asia and Australia.
Then the good side of Homo dilatus may finally surface. A crisis will hopefully bring out the Bruce Willis in all of us and with luck we'll find an unexpected way to right the world before the end of the reel. Then we'll no doubt put our feet up again.
Hunting for Root Cause: The Human Black Box
Professor of Translational Genomics, The Scripps Research Institute; Cardiologist,......
Root cause analysis is an attractive concept for certain matters in industry, engineering and quality control. A classic application is to determine why a plane crashed by finding the proverbial "black box" — the tamper-proof event data recorder. Even though this box is usually bright orange, the term symbolizes the sense of dark matter, a container with critical information to help illuminate what happened. Getting the black box audio recording is just one component of a root cause analysis for why a plane goes down.
Man is gradually being morphed into an event data recorder by virtue of each person's digital identity and presence on the Web. Not only do we post our own data, sometimes unwittingly, but others post information about us, and all of this is permanently archived. In that way it is close to tamper-proof. With increasing use of biosensors, high-resolution imaging (just think of our current cameras and video recorders, not to mention digital medical imaging), and DNA sequencing, the human data event recorder will be progressively enriched with data and information.
In our busy, networked lives of constant communication, streaming and distraction, the general trend has been away from acquiring deep knowledge of why something happened. This is best exemplified in health and medicine. Physicians rarely seek root cause. If a patient has a common condition such as high blood pressure, diabetes, or asthma, he or she is put on prescription drugs without any attempt to ascertain why the individual crashed — and certainly a new, chronic medical condition can be likened to such an event. There are usually specific reasons for these disorders, but they are not hunted down. Taken to an extreme, when an individual dies and the cause is not known, it is now exceedingly rare for an autopsy to be performed. Doctors have generally given up their quest to define root cause, and they are fairly representative of most of us. Ironically, this is happening at a time when there is unprecedented capability for finding the explanation. But we're just too busy.
So to tweak our cognitive performance in the digital world where there is certainly no shortage of data, it's time to use it and understand, as fully as possible, why unexpected or unfavorable things happen. Or even why something great transpired. It's a prototypic scientific concept that has all too often been left untapped. Each person is emerging as an extraordinary event recorder and part of the Internet of all things. Let's go deep. Nothing unexplained these days should go without a hunt.
Books & Arts editor, New Scientist; founder and editor, CultureLab...
It is one of the stranger ideas to emerge from recent physics. Take two theories that describe utterly dissimilar worlds — worlds with different numbers of dimensions, different geometries of spacetime, different building blocks of matter. Twenty years ago, we'd say those are indisputably disparate and mutually exclusive worlds. Today, there's another option: two radically different theories might be dual to one another — that is, they might be two very different manifestations of the same underlying reality.
Dualities are as counterintuitive a notion as they come, but physics is riddled with them. When physicists looking to unite quantum theory with gravity found themselves with five very different but equally plausible string theories, it was an embarrassment of riches — everyone was hoping for one "theory of everything", not five. But duality proved to be the key ingredient. Remarkably, all five string theories turned out to be dual to one another, different expressions of a single underlying theory.
Perhaps the most radical incarnation of duality was discovered in 1997 by Juan Maldacena. Maldacena found that a version of string theory in a bizarrely shaped universe with five large dimensions is mathematically dual to an ordinary quantum theory of particles living on that universe's four-dimensional boundary. Previously, one could argue that the world was made up of particles or that the world was made up of strings. Duality transformed "or" into "and" — mutually exclusive hypotheses, both equally true.
In everyday language, duality means something very different. It is used to connote a stark dichotomy: male and female, east and west, light and darkness. Embracing the physicist's meaning of duality, however, can provide us with a powerful new metaphor, a one-stop shorthand for the idea that two very different things might be equally true. As our cultural discourse is becoming increasingly polarized, the notion of duality is both more foreign and more necessary than ever. If accessible in our daily cognitive toolkit, it could serve as a potent antidote to our typically Boolean, two-valued, zero-sum thinking — where statements are either true or false, answers are yes or no, and if I'm right, then you are wrong. With duality, there's a third option. Perhaps my argument is right and yours is wrong; perhaps your argument is right and mine is wrong; or, just maybe, our opposing arguments are dual to one another.
That's not to say that we ought to descend into some kind of relativism, or that there are no singular truths. It is to say, though, that truth is far more subtle than we once believed, and that it shows up in many guises. It is up to us to recognize it in all its varied forms.
Dinosaur paleontologist and science communicator; Author: Dinosaur Odyssey: Fossil......
Humanity's cognitive toolkit would greatly benefit from adoption of "interbeing," a concept that comes from Vietnamese Buddhist monk Thich Nhat Hanh. In his words:
"If you are a poet, you will see clearly that there is a cloud floating in [a] sheet of paper. Without a cloud, there will be no rain; without rain, the trees cannot grow; and without trees, we cannot make paper. The cloud is essential for the paper to exist. If the cloud is not here, the sheet of paper cannot be here either. . . . 'Interbeing' is a word that is not in the dictionary yet, but if we combine the prefix 'inter-' with the verb 'to be,' we have a new verb, inter-be. Without a cloud, we cannot have paper, so we can say that the cloud and the sheet of paper inter-are. . . . To be is to inter-be. You cannot just be by yourself alone. You have to inter-be with every other thing. This sheet of paper is, because everything else is."
Depending on your perspective, the above passage may sound like profound wisdom or New Age mumbo-jumbo. I would like to propose that interbeing is a robust scientific fact — at least insomuch as such things exist — and, further, that this concept is exceptionally critical and timely.
Arguably the most cherished and deeply ingrained notion in the Western mindset is the separateness of our skin-encapsulated selves — the belief that we can be likened to isolated, static machines. Having externalized the world beyond our bodies, we are consumed with thoughts of furthering our own ends and protecting ourselves. Yet this deeply rooted notion of isolation is illusory, as evidenced by our constant exchange of matter and energy with the "outside" world. At what point did your last breath of air, sip of water, or bite of food cease to be part of the outside world and become you? Precisely when did your exhalations and wastes cease being you? Our skin is as much permeable membrane as barrier, so much so that, like a whirlpool, it is difficult to discern where "you" end and the remainder of the world begins. Energized by sunlight, life converts inanimate rock into nutrients, which then pass through plants, herbivores, and carnivores before being decomposed and returned to the inanimate Earth, beginning the cycle anew. Our internal metabolisms are intimately interwoven with this Earthly metabolism; one result is the replacement of every atom in our bodies every seven years or so.
You might counter with something like, "Ok, sure, everything changes over time. So what? At any given moment, you can still readily separate self from other."
Not quite. It turns out that "you" are not one life form — that is, one self — but many. Your mouth alone contains more than 700 distinct kinds of bacteria. Your skin and eyelashes are equally laden with microbes, and your gut houses a similar bevy of bacterial sidekicks. Although this still leaves several bacteria-free regions in a healthy body — for example, the brain, spinal cord, and bloodstream — current estimates indicate that your physical self possesses about a trillion human cells and about 10 trillion bacterial cells. In other words, at any given moment, your body is about 90% nonhuman, home to many more life forms than the number of people presently living on Earth; more even than the number of stars in the Milky Way Galaxy! To make things more interesting still, microbiological research demonstrates that we are utterly dependent on this ever-changing bacterial parade for all kinds of "services," from keeping intruders at bay to converting food into usable nutrients.
So, if we continually exchange matter with the outside world, if our bodies are completely renewed every few years, and if each of us is a walking colony of trillions of largely symbiotic life forms, exactly what is this self that we view as separate? You are not an isolated being. Metaphorically, to follow current bias and think of your body as a machine is not only inaccurate but destructive. Each of us is far more akin to a whirlpool, a brief, ever-shifting concentration of energy in a vast river that's been flowing for billions of years. The dividing line between self and other is, in many respects, arbitrary; the "cut" can be made at many places, depending on the metaphor of self one adopts. We must learn to see ourselves not as isolated but as permeable and interwoven — selves within larger selves, including the species self (humanity) and the biospheric self (life). The interbeing perspective encourages us to view other life forms not as objects but subjects, fellow travelers in the current of this ancient river. On a still more profound level, it enables us to envision ourselves and other organisms not as static "things" at all, but as processes deeply and inextricably embedded in the background flow.
One of the greatest obstacles confronting science education is the fact that the bulk of the universe exists either at extremely large scales (e.g., planets, stars, and galaxies) or extremely small scales (e.g., atoms, genes, cells) well beyond the comprehension of our (unaided) senses. We evolved to sense only the middle ground, or "mesoworld," of animals, plants, and landscapes. Yet, just as we have learned to accept the non-intuitive, scientific insight that the Earth is not the center of the universe, so too must we now embrace the fact that we are not outside or above nature, but fully enmeshed within it. Interbeing, an expression of ancient wisdom backed by science, can help us comprehend this radical ecology, fostering a much-needed transformation in mindset.
Linguist, cultural commentator, is a Senior Fellow, Manhattan Institute....
In an ideal world all people would spontaneously understand that what political scientists call path dependence explains much more of how the world works than is apparent. Path dependence refers to the fact that often, something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice, because once established, external factors discouraged going into reverse to try other alternatives.
The paradigm example is the seemingly illogical arrangement of letters on typewriter keyboards. Why not just have the letters in alphabetical order, or arrange them so that the most frequently occurring ones are under the strongest fingers? In fact, the first typewriter tended to jam when typed on too quickly, so its inventor deliberately concocted an arrangement that put A under the ungainly little finger. In addition, the first row was provided with all of the letters in the word "typewriter" so that salesmen, new to typing, could wangle typing the word using just one row.
Quickly, however, mechanical improvements made faster typing possible, and new keyboards placing letters according to frequency were presented. But it was too late: there was no going back. By the 1890s typists across America were used to QWERTY keyboards, having learned to zip away on new versions of them that did not stick so easily, and retraining them would have been expensive and, ultimately, unnecessary. So QWERTY was passed down the generations, and even today we use the queer QWERTY configuration on computer keyboards where jamming is a mechanical impossibility.
The basic concept is simple, but in general estimation tends to be processed as the province of "cute" stories like the QWERTY one, rather than explaining a massive weight of scientific and historical processes. Instead, the natural tendency is to seek explanations for modern phenomena in present-day conditions.
One may assume that cats cover their waste out of fastidiousness, when the same creature will happily consume its own vomit and then jump on your lap. Cats do the burying as an instinct from their wild days when the burial helped avoid attracting predators, and there is no reason for them to evolve out of the trait now (to pet owners' relief). I have often wished there were a spontaneous impulse among more people to assume that path dependence-style explanations are as likely as jerry-rigged present-oriented ones.
For one, the idea that the present is based on a dynamic mixture of extant and ancient conditions is simply more interesting than assuming that the present is (mostly) all there is, with history as merely "the past," interesting only for seeing whether something that happened then could now happen again — which is different from path dependence.
For example, path dependence explains a great deal about language which is otherwise attributed to assorted just-so explanations. Much of the public embrace of the idea that one's language channels how one thinks is based on this kind of thing. Robert McCrum celebrates English as "efficient" in its paucity of suffixes of the kind that complexify most European languages. The idea is that this is rooted in something in its speakers' spirit, which would have propelled them to lead the world via exploration and the Industrial Revolution.
But English lost its suffixes starting in the eighth century A.D., when Vikings invaded Britain and so many of them learned the language incompletely that children started speaking it that way. After that, you can't create gender and conjugation out of thin air — there's no going back until gradual morphing recreates such things over eons of time. That is, English's current streamlined syntax has nothing to do with any present-day condition of the spirit, nor with any even four centuries ago. The culprit is path dependence, as are most things about how a language is structured.
Or, we hear much lately about a crisis in general writing skills, supposedly due to email and texting. But there is a circularity here — why, precisely, could people not write emails and texts in the same "writerly" style that people used to couch letters in? Or, we hear of a vaguely defined effect of television, even though kids were curled up endlessly in front of the tube starting in the fifties, long before the eighties, when outcries of this kind first took on their current level of alarm in the report A Nation at Risk.
Once again, the presentist explanation does not cohere, whereas one based on an earlier historical development that there is no turning back from does. Public American English began a rapid shift from cosseted to less formal "spoken" style in the sixties, in the wake of cultural changes amidst the counterculture. This sentiment directly affected how language arts textbooks were composed, the extent to which any young person was exposed to an old-fashioned formal "speech," and attitudes towards the English language heritage in general. The result: a linguistic culture stressing the terse, demotic, and spontaneous. After just one generation minted in this context, there was no way to go back. Anyone who decided to communicate in the grandiloquent phraseology of yore would sound absurd and be denied influence or exposure. Path dependence, then, identifies this cultural shift as the cause of what dismays, delights, or just interests us in how English is currently used, and reveals television, email and other technologies as merely epiphenomenal.
Most of life looks path dependent to me. If I could create a national educational curriculum from scratch, I would include the concept as one taught to young people as early as possible.
E Pluribus Unum
Professor of computer science, Cornell University...
If you used a personal computer 25 years ago, everything you needed to worry about was taking place in the box in front of you. Today, the applications you use over the course of an hour are scattered across computers all over the world; for the most part, we've lost the ability to tell where our data sits at all. We invent terms to express this lost sense of direction: our messages, photos, and on-line profiles are all somewhere in "The Cloud".
The Cloud is not a single thing; what you think of as your Gmail account or Facebook profile is in fact made possible by the teamwork of a huge number of physically dispersed components — a distributed system, in the language of computer science. But we can think of it as a single thing, and this is the broader point: The ideas of distributed systems apply whenever we see many small things working independently but cooperatively to produce the illusion of a single unified experience. This effect takes place not just on the Internet, but in many other domains as well. Consider for example a large corporation that is able to release new products and make public announcements as though it were a single actor, when we know that at a more detailed level it consists of tens of thousands of employees. Or a massive ant colony engaged in coordinated exploration, or the neurons of your brain creating your experience of the present moment.
The challenge for a distributed system is to achieve this illusion of a single unified behavior in the face of so much underlying complexity. And this broad challenge, appropriately, is in fact composed of many smaller challenges in tension with each other.
One basic piece of the puzzle is the problem of consistency. Each component of a distributed system sees different things and has a limited ability to communicate with everyone else, so different parts of the system can develop views of the world that are mutually inconsistent. There are numerous examples of how this can lead to trouble, both in technological domains and beyond. Your handheld device doesn't sync with your e-mail, so you act without realizing that there's already been a reply to your message. Two people across the country both reserve seat 5F on the same flight at the same time. An executive in an organization "didn't get the memo" and so strays off-message. A platoon attacks too soon and alerts the enemy.
It is natural to try "fixing" these kinds of problems by enforcing a single global view of the world, and requiring all parts of the system to constantly refer to this global view before acting. But this undercuts many of the reasons why one uses a distributed system in the first place. It makes the component that provides the global view a massive bottleneck, and a highly dangerous single point of potential failure. The corporation doesn't function if the CEO has to sign off on every decision.
To get a more concrete sense for some of the underlying design issues, it helps to walk through an example in a little detail, a basic kind of situation in which we try to achieve a desired outcome with information and actions that are divided over multiple participants. The example is the problem of sharing information securely: imagine trying to back up a sensitive database on multiple computers, while protecting the data so that it can only be reconstructed if a majority of the backup computers cooperate. But since the question of secure information sharing ultimately has nothing specifically to do with computers or the Internet, let's formulate it instead using a story about a band of pirates and a buried treasure.
Suppose that an aging Pirate King knows the location of a secret treasure, and before retiring he intends to share the secret among his five shiftless sons. He wants them to be able to recover the treasure if three or more of them work together, but he also wants to prevent a "splinter group" of one or two from being able to get the treasure on their own. To do this, he plans to split the secret of the location into five "shares," giving one to each son, in such a way that he ensures the following condition. If at any point in the future, at least three of the sons pool their shares of the secret, then they will know enough to recover the treasure. But if only one or two pool their shares, they will not have enough information.
How to do this? It's not hard to invent ways of creating five clues so that all of them are necessary for finding the treasure. But this would require unanimity among the five sons before the treasure could be found. How can we do it so that cooperation among any three is enough, and cooperation among any two is insufficient?
Like many deep insights, the answer is easy to understand in retrospect. The Pirate King draws a secret circle on the globe (known only to himself) and tells his sons that he's buried the treasure at the exact southernmost point on this circle. He then tells each son a different point on this circle. Three points are enough to uniquely reconstruct a circle, so any three pirates can pool their information, identify the circle, and find the treasure. But for any two pirates, an infinity of circles pass through their two points, and they cannot know which one they need for recovering the secret. It's a powerful trick, and broadly applicable; in fact, versions of this secret-sharing scheme, discovered by the cryptographer Adi Shamir, form a basic principle of modern data security: arbitrary data can be encoded using points on a curve and reconstructed from knowledge of enough other points on the same curve.
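The circle trick can be made concrete in a few lines of code. Below is a minimal sketch — not a production implementation — of Shamir-style secret sharing, with a random polynomial over a prime field standing in for the Pirate King's circle: the secret is the polynomial's constant term, each of the five shares is a point on the curve, and any three shares recover the secret by interpolating the curve back at zero, while any two are consistent with infinitely many curves. The modulus and the example secret here are arbitrary choices for the demo.

```python
import random

P = 2**31 - 1  # a prime modulus; all arithmetic happens in this field

def make_shares(secret, k=3, n=5):
    """Hide `secret` as the constant term of a random degree-(k-1)
    polynomial; each share is a point (x, poly(x)) on that curve."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 to recover the
    secret. Needs at least k shares; fewer determine nothing."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(424242)          # five shares, threshold three
print(recover(shares[:3]))            # any three suffice: prints 424242
```

Any subset of three or more of the five shares reconstructs the same secret; which three is irrelevant, just as any three pirates could pool their points on the circle.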
The literature on distributed systems is rich with ideas in this spirit. More generally, the principles of distributed systems give us a way to reason about the difficulties inherent in complex systems built from many interacting parts. And so to the extent that we sometimes are fortunate enough to get the impression of a unified Web, a unified global banking system, or a unified sensory experience, we should think about the myriad challenges involved in keeping these experiences whole.
Absence and Evidence
Archaeologist, Journalist; Author, Artifacts...
I first heard the words "absence of evidence is not evidence of absence" as a first-year archaeology undergraduate. I now know it was part of Carl Sagan's retort to arguments from ignorance, but at the time the unattributed quote was part of the intellectual toolkit offered by my professor to help us make sense of the process of excavation.
Philosophically this is a challenging concept, but at an archaeological site all became clear in the painstaking tasks of digging, brushing and trowelling. The concept was useful to remind us, as we scrutinised what was there, to take note of the possibility of what was not there. What we were finding, observing, and lifting, were the material remains, the artifacts which had survived, usually as a result of their material or the good fortune of their deposition. There were barely recordable traces of what was there — the charcoal layer of a prehistoric hearth for example — and others recovered in the washing, or the lab, but this was still tangible evidence. What the concept brought home to us was the invisible traces, the material which had gone from our reference point in time, but which still had a bearing in the context.
It was powerful stuff which stirred my imagination. I looked for more examples outside philosophy. I learned about the great Near Eastern archaeologist Sir Leonard Woolley, who excavated the 3rd millennium BC Mesopotamian palace at Ur, in modern-day Iraq. There he conjured up musical instruments from their absence. The evidence was the holes left in the excavation layers, the ghosts of wooden objects which had long since disappeared into time. He used this absence to advantage by making casts of the holes and realising the instruments as reproductions. It struck me at the time that he was creating works of art. The absent lyres were installations which he rendered as interventions, and transformed into artifacts. More recently the British artist Rachel Whiteread has made her name through an understanding of the absent form, from the cast of a house to the undersides and spaces of domestic interiors.
Recognising the evidence of absence is not about forcing a shape on the intangible, but about acknowledging a potency in the not-thereness. If we take the absence concept to be a positive idea, I suggest, interesting things happen. For years Middle Eastern archaeologists puzzled over the numerous, isolated bath-houses and other structures in the deserts of North Africa. Where was the evidence of habitation? The clue was in the absence: the buildings were used by nomads who left only camel prints in the sand. Their habitations were ephemeral tents which, if not taken away with them, were of such material that they, too, would disappear into the sand. Observed again in this light, the aerial photos of desert ruins are hauntingly repopulated.
The absent evidence of ourselves is all around us, beyond the range of digital traces.
When my parents died and I inherited their house, the task of clearing their rooms was both emotional and archaeological. The last mantelpiece in the sitting room had accreted over 35 years of married life, a midden of photos, ephemera, beach-combing trove, and containers of odd buttons and old coins. I wondered what a stranger — maybe a forensic scientist, or a traditional archaeologist — would make of this array if the narrative were woven simply from the tangible evidence. But as I took the assemblage apart in a charged moment, I felt there was a whole lot of no-thing coming away with it. Something invisible and unquantifiable, which had been holding these objects in that context.
I recognised the feeling, and cast my memory back to my first archaeological excavation. It was of a long-limbed hound, one of those 'fine, hunting dogs' the classical writer Strabo described as being traded from ancient Britain into the Roman world. As I knelt in the 2000-year-old grave, carefully removing each tiny bone as if engaged in a sculptural process, I felt the presence of something absent. I could not quantify it, but it was that unseen 'evidence' which, it seemed, had given the dog its dog-ness.
The Lure Of A Good Story
Neuroscientist, Stanford University; Author, Monkeyluv...
Various concepts come to mind for inclusion in that cognitive toolkit. "Emergence," or related to that, "the failure of reductionism" — mistrust the idea that if you want to understand a complex phenomenon, the only tool of science to use is to break it into its component parts, study them individually in isolation, and then glue the itty-bitty little pieces back together. This often doesn't work and, increasingly, it seems like it doesn't work for the most interesting and important phenomena out there. To wit — you have a watch that doesn't run correctly and often, indeed, you can fix it by breaking it down to its component parts and finding the gear that has had a tooth break (actually, I haven't a clue if there is any clock on earth that still works this way). But if you have a cloud that doesn't rain, you don't break it down to its component parts. Ditto for a person whose mind doesn't work right. Or for going about understanding the problems of a society or ecosystem. So that was a scientific concept that was tempting to cite.
Related to that are terms like "synergy" and "interdisciplinary," but heaven save us from having to hear more about those words. There are now whole areas of science where you can't get a faculty position unless you work one of those words into the title of your job talk and have it tattooed on the small of your back.
Another useful scientific concept is "genetic vulnerability." It would be great if this found its way into everyone's cognitive toolkit, because its evil cousins, genetic inevitability and genetic determinism, are already deeply entrenched there, with long, long legacies of bad consequences. Everyone should be taught about work like that of Avshalom Caspi and colleagues, who looked at genetic polymorphisms related to various neurotransmitter systems that are associated with psychiatric disorders and antisocial behaviors. Aha, far too many people will say, drawing on that nearly useless, misshapen tool of genetic determinism: have one of those polymorphisms and you're hosed by inevitability. And instead, what those studies beautifully demonstrate is how these polymorphisms carry essentially zero increased risk of those disorders … unless you grow up in particularly malign environments. Genetic determinism, my tuches.
But the scientific concept that I've chosen is one that is useful simply because it isn't a scientific concept — because it can be the antithesis of one: "anecdotalism." Every good journalist knows its power — start an article with statistics about foreclosure rates, or feature a family victimized by some bank? No-brainer. Display maps showing the magnitudes of refugees flowing out of Darfur, or the face of one starving orphan in a camp? Obvious choice. Galvanize the readership.
But anecdotalism is potentially a domain of distortion as well. Absorb the lessons of science and cut saturated fats from your diet, or cite the uncle of the spouse of a friend who eats nothing but pork rinds and is still pumping iron at age 110? Depend on one of the foundations of the 20th century's extension of life span and vaccinate your child, or obsess over a National Enquirer-esque horror story of one vaccination disaster and don't immunize? I shudder at the current potential for another case of anecdotalism — I write four days after the Arizona shooting of Gabby Giffords and 19 other people by Jared Loughner. As of this writing, experts such as the esteemed psychiatrist Fuller Torrey are guessing that Loughner is a paranoid schizophrenic. And if this is true, this anecdotalism will give new legs to the tragic misconception that the mentally ill are more dangerous than the rest of us.
So maybe when I argue for "anecdotalism" going into everyone's cognitive toolkit, I am really arguing for two things to be incorporated: a) an appreciation of how distortive it can be, and b) a recognition, in a salute to the work of people like Tversky and Kahneman, of its magnetic pull, its cognitive satisfaction. For a social primate complete with a region of the cortex specialized for face recognition, the individual face — whether literal or metaphorical — has a special power. But unappealing, unintuitive patterns of statistics and variation generally teach us much more.
Game of Life — And Looking For Generators
Philosopher; Professor, Oxford University; Director, Future of Humanity Institute;......
The Game of Life is a cellular automaton, invented by the British mathematician John Horton Conway in 1970.
Many will already be acquainted with Conway's invention. For those who aren't, the best way to become familiar with it is to experiment with one of the many free implementations that can be found on the Internet (or, better yet, if you have at least rudimentary programming skills, to make one yourself).
Basically, there is a grid and each cell can be in either of two states: dead or alive. One starts by seeding the grid with some initial distribution of live cells. Then one lets the system evolve according to three simple rules.
(Birth) A dead cell with exactly three live neighbours becomes a live cell.
(Survival) A live cell with two or three live neighbours stays alive.
(Death) Any other cell dies or remains dead.
"Gosper's Glider Gun"
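The three rules are compact enough to implement in a few lines. Here is a minimal sketch in Python, representing the grid as a set of live-cell coordinates (one arbitrary representation choice among many). The seed is a glider, one of the small moving patterns the Game is famous for: after four generations the same shape reappears, shifted one cell diagonally.

```python
from itertools import product

def step(live):
    """Apply one generation of Conway's three rules to a set of
    (x, y) coordinates of live cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Birth: dead cell with exactly three live neighbours.
    # Survival: live cell with two or three. Everything else dies.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # prints True
```

Note that only the seed and the three rules are specified; the glider's diagonal crawl is nowhere in the code, which is the point about emergent complexity made below.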
Why is this interesting? Certainly, the Game of Life is not biologically realistic. It doesn't do anything useful. It isn't even really a game in the ordinary sense of the word.
But it's a brilliant demonstration platform for several important concepts — a virtual 'philosophy of science laboratory'. (The philosopher Daniel Dennett has expressed the view that it should be incumbent on every philosophy student to be acquainted with it.) It gives us a microcosm simple enough that we can easily understand how things happen, yet with sufficient generative power to produce interesting phenomena.
By playing with the Game of Life for an hour, one can develop an intuitive understanding of the following concepts and ideas:
• Emergent complexity — How complex patterns can arise from very simple rules.
• Basic dynamics concepts — such as the distinction between laws of nature and initial conditions.
• Levels of explanation — One quickly notices patterns arising that can be efficiently described in higher-level terms (such as "gliders", a specific kind of pattern that crawls across the screen) but that are quite cumbersome to describe in the language of the basic physics upon which the patterns supervene (i.e., in terms of individual cells being alive or dead).
• Supervenience — This leads one to think about the relation between different sciences in the real world… Does chemistry, likewise, supervene on physics? Biology on chemistry? The mind on the brain?
• Concept formation, and carving nature at its joints — how and why we recognize certain types of pattern and give them names. For instance, in the Game of Life one distinguishes "still lives", small local patterns that are stable and unchanging; "oscillators", local patterns that perpetually cycle through a fixed sequence of states; "spaceships", patterns that move across the grid (such as gliders); "guns", stationary patterns that send out an incessant stream of spaceships; and "puffer trains", patterns that move themselves across the grid leaving debris behind. As one begins to form these and other concepts, the chaos on the screen gradually becomes more comprehensible. Developing concepts that carve nature at its joints is the first crucial step towards understanding, not only in the Game of Life but in science and in ordinary life as well.
At a more advanced level, one discovers that the Game of Life is Turing complete. That is, it's possible to build a pattern that acts like a universal Turing machine. Thus, any computable function could be implemented in the Game of Life — including perhaps a function that describes a universe like the one we inhabit. It's also possible to build a universal constructor in the Game of Life, a pattern which can build many types of complex objects, including copies of itself. Nonetheless, it seems that the structures that evolve in the Game of Life are different from the ones we find in the real world: Game of Life structures tend to be very fragile, in the sense that changing a single cell will often cause them to dissolve. It is interesting to try to figure out exactly what it is about the rules of the Game of Life and the laws of physics that govern our own universe that accounts for these differences.
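The fragility is easy to verify directly. In the sketch below (again using a minimal implementation of the standard B3/S23 rule over a set of live-cell coordinates, my own representation), deleting a single cell from a glider destroys it: instead of gliding on, the remnant freezes into a motionless still life within a few generations.

```python
from collections import Counter

def step(live):
    # Standard B3/S23 Life rule on a set of (row, col) coordinates.
    counts = Counter((r + dr, c + dc) for r, c in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

# Delete a single cell from the glider...
damaged = glider - {(0, 1)}
for _ in range(4):
    damaged = step(damaged)

# ...and within four generations the remnant has settled into a
# "beehive", a stable still life: it no longer moves at all.
assert step(damaged) == damaged
assert damaged == {(1, 1), (1, 2), (2, 0), (2, 3), (3, 1), (3, 2)}
```

Real-world organisms, by contrast, routinely survive far larger perturbations — a difference that invites exactly the comparison between Life's rules and our physics suggested above.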
Conway's Life is perhaps best viewed not as a single shorthand abstraction, but rather as a generator of such abstractions. We get a whole bunch of useful abstractions — or at least a recipe for how to generate them — all for the price of one.
And this, in fact, points us to one especially useful shorthand abstraction: the strategy of Looking for Generators. We confront many problems. We can try to solve them one by one. But alternatively, we can try to create a generator that produces solutions to multiple problems.
Consider, for example, the challenge of advancing scientific understanding. We might make progress by directly tackling some random scientific problem. But perhaps we can make more progress by Looking for Generators and focusing our efforts on a certain subset of scientific problems: those whose solutions would do most to facilitate the discovery of many other solutions. On this approach, we would pay most attention to innovations in methodology that can be widely applied; to the development of scientific instruments that can enable many new experiments; and to improvements in institutional processes, such as peer review, that can make many decisions about whom to hire, fund, and promote more closely reflect true merit.
In the same vein, we would be extremely interested in developing effective biomedical cognitive enhancers and other ways of improving the human thinker — the brain being, after all, the generator par excellence.