The study of artificial intelligence has a particular criterion for defining "intelligence," one that grew out of Turing's ideas and the Turing test. A programmer might build a very complex system and try to anticipate everything that could happen, so the program reacts correctly to whatever it encounters. By the AI criterion, though, this is not intelligence. Even if the programmer could anticipate every possible situation (which is not actually possible, and would require a program of almost infinite length), the resulting robot would not be intelligent by this definition, because it would never have the opportunity to produce an intelligent reaction of its own. That is, the robot must be able to encounter a situation for which it was not programmed and come up with a "good" response (where the definition of "good" may vary). An intelligent robot must be able to self-organize.
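The distinction can be sketched in code. Below is a minimal, hypothetical illustration (all class and rule names are invented, not from the source): a pre-programmed agent answers only from a fixed table its programmer wrote, while a self-organizing agent improvises a response to an unanticipated situation and remembers it.

```python
class PreProgrammedAgent:
    """Responds only from a fixed table written by the programmer."""

    def __init__(self, rules):
        self.rules = rules  # situation -> response, fixed in advance

    def respond(self, situation):
        # Unanticipated situations have no entry: the agent simply fails.
        return self.rules.get(situation, "no response")


class SelfOrganizingAgent:
    """Starts with the same table but extends it from experience."""

    def __init__(self, rules):
        self.rules = dict(rules)  # private copy it is allowed to grow

    def respond(self, situation):
        if situation not in self.rules:
            # Improvise from the closest known situation, then remember it.
            self.rules[situation] = self._generalize(situation)
        return self.rules[situation]

    def _generalize(self, situation):
        # Crude similarity: reuse the response of the known situation
        # sharing the most words with the new one.
        best, overlap = "explore cautiously", 0
        for known, response in self.rules.items():
            shared = len(set(known.split()) & set(situation.split()))
            if shared > overlap:
                best, overlap = response, shared
        return best


rules = {"obstacle ahead": "turn left", "low battery": "return to dock"}
fixed = PreProgrammedAgent(rules)
adaptive = SelfOrganizingAgent(rules)

print(fixed.respond("obstacle behind"))     # no response
print(adaptive.respond("obstacle behind"))  # turn left (generalized, then stored)
```

The word-overlap heuristic is deliberately crude; the point is only the structural difference: the first agent's behavior is closed at programming time, while the second can extend its own rule set when reality outruns its programmer's foresight.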
Jerome Heath