Response to Chapter 5 of Braitenberg's "Vehicles"

Very often one will stop and gaze at a machine, wondering how it carries out its task. Observing its movements in awe, the onlooker will most likely fail to discover the inner processes that allow it to operate. This is not because the machine is too complicated for people to understand; there is a psychological explanation. If something acts in a manner that seems complex, then one will naturally believe that complicated processes must be producing the actions. Braitenberg explains the psychology behind this situation in his “law of uphill analysis and downhill invention.”
            Valentino Braitenberg discusses what he calls the psychological consequence of the “law of uphill analysis and downhill invention” in Chapter 5. The law vividly captures the observation that machines are easy to understand if you have built them; otherwise, it is difficult to work out their inner mechanisms from an outside view. Of course, the viewer must lack the general knowledge of how to create a robot, or else this law would not apply. Our own robot provides an example: the robot engineers in our group do not understand how our robot thinks and finds the gold, which means they do not know how the algorithm works. Even if one observed our robot run over a long period of time, it would be nearly impossible to figure out how it is thinking without being the programmer. The programmers, however, can easily predict which direction the robot will move; they can go over the code and draw the exact path it follows. That is why we can debug it before we test it on the real board. We can therefore conclude from this observation that Braitenberg’s law applies: the robot engineers watch the robot navigate the maze in wonder, while the programmers understand exactly how it is thinking.
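            The asymmetry above can be made concrete with a small simulation. The sketch below is our own hypothetical illustration (it is not our robot's actual code, nor code from Braitenberg's book): a vehicle with two light sensors cross-wired to two motors, in the spirit of Braitenberg's simple vehicles. The entire "mind" of the vehicle is a few lines of arithmetic, yet an outside observer watching it curve purposefully toward the light could easily imagine far more elaborate machinery; the builder, meanwhile, can predict its path exactly.

```python
import math

# Hypothetical sketch of a Braitenberg-style vehicle: two light sensors
# cross-wired to two motors. The light position and all gains below are
# arbitrary choices for illustration.

LIGHT = (3.0, 2.0)  # position of the light source

def intensity(px, py):
    """Light intensity at a point, falling off with squared distance."""
    d2 = (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, dt=0.1):
    """One update: sample both sensors, set speed and turn rate, move."""
    # Sensors sit one unit ahead of the body, angled +/- 0.3 rad.
    left = intensity(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3))
    right = intensity(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3))
    speed = (left + right) / 2.0    # brighter light -> faster
    turn = 8.0 * (left - right)     # crossed wiring: steer toward the light
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

def simulate(steps):
    """Run the vehicle from the origin, initially facing along the x-axis."""
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(steps):
        x, y, heading = step(x, y, heading)
    return x, y, heading

x, y, h = simulate(150)
print(f"after 150 steps: position=({x:.2f}, {y:.2f}), heading={h:.2f} rad")
```

Nothing in the control law mentions "seeking" or "goals"; the apparent purposefulness emerges entirely from the crossed sensor-to-motor wiring, which is exactly the point of downhill invention.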
            Braitenberg’s law can also be applied across a broad spectrum of ideas and problems. According to the “law of uphill analysis and downhill invention,” the cognitive activities performed by humans can be explained by the workings of the simplest components of the brain. As a group, we agree with Braitenberg’s view when it is applied to the brain. We believe that all of the cognitive processes performed by humans are a direct result of the processes carried out by neurons. This, of course, is a materialist point of view. If one does not believe that all human processes can be explained by our material being, then Braitenberg’s law cannot be applied.
            A counterargument to Braitenberg’s view could be made from a dualist’s point of view. A dualist would argue that the human body and the human mind are two separate entities. Braitenberg’s law therefore cannot be applied to the mind, because the mind is not physical matter that can be broken down into simple components. A dualist would describe the mind as something that is neither tangible nor fully understandable; it simply exists. In this case, Braitenberg would not (with our current understanding) be able to answer this objection, because a dualist mind cannot be compared to a robot made of simple components. We do not understand the substance of which a dualist mind is made.
            From the view of an idealist, matter exists only as ideas in the mind. Matter and its simplest components would therefore be creations of the mind, which of course cannot be analyzed to explain our cognitive processes. How would Braitenberg answer such an objection? The answer is simply this: he would not. Braitenberg’s law is deeply rooted in materialist beliefs. Idealism and materialism are complete opposites, but neither can be proven over the other. It is a stalemate.
            In conclusion, it is reasonable to apply Braitenberg’s “law of uphill analysis and downhill invention” to the internal processes of machines; however, when we apply it to the cognitive processes of the mind, we encounter problems. One may therefore pose the question: will machine intelligence ever be comparable to human intelligence? To tackle this inquiry, we would first have to define human intelligence, which leads us straight back to the mind-body problem. If, however, the mind did consist entirely of physical matter and could be broken down into simple processes, the question would become: when will machines think like us?

