1-1-16 update
why create machines with the disabilities of human intelligence?
scientists ignore hazard beacons posted by previous explorers!
this article about AI is non-technical, but it is intended to illuminate a significant technical subject and to ask some important questions along the way.
it has been painful to realize how impoverished we become when we invest too heavily in intelligence and insufficiently in emotion, as scientists often do. now it appears we are headed toward a wall-e world in which emotion surrenders almost entirely to technology. the science of the future seems to be ignoring the warning beacons of the past.
there are now three obstacles thwarting technological solutions to problems. the first is the "theory of unintended consequences," which holds that for every solution to a problem there arises at least one new problem, and that the new problem must be a little worse than the original. this is an informal way of paraphrasing entropy.
my favorite examples are described in the book "why things bite back." one is the use of padded gloves in boxing. the gloves are supposed to be protective, but as the author describes, this purpose is defeated: when a punch is thrown, the fist rotates, and the synthetic surface of the glove adds surface area and friction, so that a landing punch rotates the head in a way that causes brain damage. this article asks: won't this problem of unintended consequences reach into information science and the development of AI?
today I read about the release of new AI software for machine learning called tensorflow. I followed up by reading the code and math used in its development. the simulation of the human brain in math, called "neural networks," is prominent in this freeware program. I also followed some of the related articles on the application of statistics to information theory. my impression of the AI field has not improved much in 20 years, but during that time we've gone from mostly theory to actual implementations of theory. for example, I can use voice recognition to produce most text now, but the error rate is not much better than it was 10 years ago, in part because "improvements" added new problems. attempts to use context are better but introduce larger problems to correct when proofreading.
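to make the phrase "simulation of the human brain in math" concrete, here is a minimal sketch of my own (not code from tensorflow itself; the layer sizes, weights, and input values are arbitrary assumptions for illustration). a "neural network" reduces to matrices of numbers, multiplied together and passed through a squashing function:

    import numpy as np

    # a tiny two-layer "neural network": repeated matrix multiplication
    # followed by a nonlinearity. sizes are arbitrary:
    # 4 inputs -> 8 hidden units -> 2 outputs.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))  # weights of the hidden layer
    W2 = rng.normal(size=(8, 2))  # weights of the output layer

    def sigmoid(z):
        # maps any real number into (0, 1); the "neuron firing" metaphor
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        hidden = sigmoid(x @ W1)     # weighted sum, then nonlinearity
        return sigmoid(hidden @ W2)  # the same operation again

    print(forward(np.array([0.5, -1.0, 2.0, 0.0])))

training consists of nudging the numbers in W1 and W2 until the outputs match examples. there is nothing more "brain-like" in it than that.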
more importantly, during the period that I was involved in computer science I also noticed that the most important problems facing humanity remain unsolved. and so my big question is: why are we calling it "intelligence" if it's an artificial version of what we already have? people invented the concept of intelligence and applied it to themselves. what could be more tautological?
the other two obstacles are gödel's incompleteness theorem and heisenberg's uncertainty principle. the titles will have to suffice for their contributions to this question. if we simply settle for the ordinary definition of intelligence, then there will be short-term practical uses for the technology, and long-term consequences. that appears to be the direction. people are already contorting themselves into unhealthy habits to compensate for the shortcomings of technology, and the results have inspired ergonomic furniture for chronic computer users, filters to protect eyes, and corrective glasses for eyes already damaged.
if human eyes can see only a small fraction of the full spectrum of "light," will we build this disability into our intelligent machines? radio telescopes can "see" more of the spectrum, but their pictures are coded in "false color" because people cannot interpret the results otherwise; humans are unable to "relate" to the raw data. if we build "intelligence," will we include functions enabling the machines to "relate to humans," or will we need several types of intelligence in layers that can communicate with each other first? (this is already implemented and standardized.) the first layer gathers raw data, the next "decides" what people can comprehend, and an interface layer communicates with people. this question is premature if we accept the usual definition of "intelligence," because "understanding natural language" is still listed among the unsolved problems of AI.
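as a toy illustration of those three layers (the function names, frequencies, and color mapping below are hypothetical, invented for this sketch, and not any real standard), the pipeline could look like this:

    # hypothetical three-layer pipeline: sense -> translate -> interface
    def sense_layer():
        # layer 1: raw radio-frequency readings no human eye can perceive
        return [(1.4e9, 0.82), (4.8e9, 0.31)]  # (frequency in hz, intensity)

    def translation_layer(readings):
        # layer 2: "decide" how to map invisible frequencies onto visible colors
        palette = {1.4e9: "red", 4.8e9: "blue"}
        return [(palette[f], i) for f, i in readings]

    def interface_layer(colored):
        # layer 3: communicate with the human in human terms
        for color, intensity in colored:
            print(f"{color} region, brightness {intensity:.2f}")

    interface_layer(translation_layer(sense_layer()))

each layer only knows how to talk to its neighbors. the machine "sees" everything, but only a translated shadow of it ever reaches a person.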
there are unsolved problems in AI which are called "AI-hard." the implication is that the human definition of intelligence is accepted, and that machines which can imitate this intelligence are the solution to the problem, a solution not yet discovered. presumably coders have evolution in mind as they build new machines, and they will eventually reach the precipice where some human beings now stand, facing the daunting limits and disabilities of "natural intelligence." or are people so determined to make a machine imitate a person that they lose sight of the fact that, with all our great natural intelligence, we are still not effective communicators, and that natural language is riddled with problems resulting from our natural lack of intelligence? perhaps the most important question is: if we have this chance to take a giant step forward in our evolution, why work so hard to build our assumptions about intelligence into machines and potentially replicate the problematic trajectory of humanity? is it possible that intelligence is over-valued?
this appears to lead to an increasingly mechanized view of life.
maybe the machines will become mirrors of our vanity.
another problem, growing like kudzu, is the useless excess of information in databases. how many pages of search results can you scan compared with the total number of results listed? how much of the result set is influenced by advertising? is any of this really intelligent, or is it just a mechanized version of what we had before? it appears that many "improvements" to technology are not significant improvements, and important information and experience are lost in the mass of data.
as a college student I could walk through the massive stacks of books at the library and discover topics I never knew existed. with google, very often I cannot find something unless I already know what it is or what it is called. there are "threads" to be followed which lead from one topic to the next: if you didn't know what a pacinian corpuscle was before you started a wikipedia article about the sense of touch, you can click the link and follow that trail as far as desired. if I want to know whether another scientist has already done an experiment, so as to avoid repeating it, I can do so provided that the prior work was coded in the same keywords I use and that it's not on the 379th page of search results. google scholar is an attempt to separate knowledge from hype, but research articles contain no less commercial bias; the fact that research gets funded at all is an indication of a bias that is most often commercial.
medical technology is an area of applied AI where we can see that implementing human intelligence may not be ideal. if there are already too many people and resources are diminishing, then why keep people alive artificially longer than is beneficial?
this article has no ending because AI is a dynamic topic. so people say, but after more than 20 years of experience in this area I really don't see the miracle.
here is a chance to add a comment about why machines will never emulate human intelligence: human intelligence and emotion are intertwined because the desired outcome is survival, and survival depends on the contributions of individuals who actually differ substantially. as biology and machines merge in newer technologies, we will see that what we once called a machine has something much more like a brain, and that we train our minds to act more like the machines we're trying to build. this is already happening, and it's scary. the reason it's scary to me is that I don't see the real benefit, and it looks like folly.