Robotics
A robot is a computer-controlled machine programmed to move, manipulate objects, and accomplish work while interacting with its environment. Robots can perform repetitive tasks more quickly, cheaply, and accurately than humans. Any robot designed to move in an unstructured or unknown environment requires multiple sensors and controls, such as ultrasonic or infrared sensors, to avoid obstacles. Some robots, such as the National Aeronautics and Space Administration (NASA) planetary rovers, require a multitude of sensors and powerful onboard computers to process the complex information that gives them mobility, while other robots need only a single sensor input to start their work. Rich sensing is particularly important for robots designed to work in close proximity to human beings, such as robots that assist persons with disabilities and robots that deliver meals in a hospital. Safety must be integral to the design of human service robots (such as a mechanical human arm).
Automated machines will increasingly assist humans in the manufacture of new products, the maintenance of the world's infrastructure, and the care of homes and businesses. Robots will be able to build new highways, construct the steel frameworks of buildings, clean underground pipelines, deliver utensils on time, and mow lawns. Prototypes of systems to perform all of these tasks already exist.
One important trend is the development of micro-electromechanical systems, ranging in size from centimeters to millimeters. These tiny robots may be used to move through blood vessels to deliver medicine or clean arterial blockages. They also may work inside large machines to diagnose impending mechanical problems.
Perhaps the most dramatic changes in future robots will arise from their increasing ability to reason. The field of artificial intelligence is moving rapidly from university laboratories to practical application in industry, and machines are being developed that can perform cognitive tasks, such as strategic planning and learning from experience. Increasingly, the diagnosis of failures in aircraft or satellites, the management of a battlefield, or the control of a large factory will be performed by intelligent computers. Artificial intelligence might eventually replicate most human senses.
Artificial Intelligence or AI is a term that in its broadest sense would indicate the ability of an artifact to perform the same kinds of functions that characterize human thought (as a reaction to an environmental change - stimulus). The possibility of developing some such artifact has intrigued human beings since ancient times. With the growth of modern science, the search for AI has taken two major directions: psychological and physiological research into the nature of human thought, and the technological development of increasingly sophisticated computing systems and programming techniques.
In the latter sense, the term AI has been applied to computer systems and programs capable of performing tasks more complex than straightforward programming (e.g., NASA's Pathfinder rover), although still far from the realm of actual thought.
Many scientists remain doubtful that true AI can ever be developed. The operation of the human mind is still little understood, and computer design may remain essentially incapable of analogously duplicating those unknown, complex processes.
In robotics, three programming techniques are used to program a robot:
1. Teach-and-Repeat
2. Programming languages
3. Off-line Programming from a CAD system (Ex. Anon, Bonney, Howie)
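The teach-and-repeat technique (item 1 above) can be sketched in a few lines: the operator jogs the robot to a series of poses, each pose is recorded, and the stored sequence is later replayed. The sketch below is a minimal illustration; the `Robot` class and its `move_to` method are hypothetical stand-ins for a vendor-specific controller API.

```python
# Minimal sketch of teach-and-repeat programming.
# The Robot class is a hypothetical stand-in for a vendor API.

class Robot:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)   # x, y, z in metres

    def move_to(self, pose):
        self.pose = pose              # a real controller would drive the joints

class TeachPendant:
    def __init__(self, robot):
        self.robot = robot
        self.waypoints = []

    def teach(self, pose):
        """Teach mode: jog the robot to a pose and record it."""
        self.robot.move_to(pose)
        self.waypoints.append(pose)

    def repeat(self):
        """Repeat mode: replay the recorded poses in order."""
        for pose in self.waypoints:
            self.robot.move_to(pose)

pendant = TeachPendant(Robot())
pendant.teach((0.1, 0.0, 0.2))
pendant.teach((0.1, 0.3, 0.2))
pendant.repeat()                       # replays both recorded poses
```

The same record-and-replay structure underlies real teach pendants; only the storage format and the motion interpolation are more elaborate.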
Implementing AI and Robotics, and the Future of Programming Techniques
Low-level programming paradigms evolved from the binary code used with early computer systems. Later, with the evolution of assembly language, procedural, structured, and many other techniques were developed to make programming easier, cheaper, and more effective. Different programming languages were developed using these techniques, and later the same programming language could support several programming techniques (e.g., Visual Basic supports event-driven, structured, and object-oriented programming). Declarative programming paradigms opened a new way to develop software for robots. In these languages the computer is told what the problem is, not how to solve it: the program is structured as a collection of properties the expected result should have, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. The archetypal example of a declarative language is the fourth-generation language SQL; the family of functional languages and logic programming also belong here. The logic programming paradigm views computation as automated reasoning over a body of knowledge. Facts about the problem domain are expressed as logic formulas, and programs are executed by applying inference rules over them until an answer to the problem is found or the collection of formulas is proved inconsistent.
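The logic programming idea described above can be illustrated without a dedicated language: facts are stored as tuples, and inference rules are applied repeatedly until no new facts appear (forward chaining to a fixed point). The `connected`/`reachable` relations below are toy assumptions chosen for the example.

```python
# Toy forward-chaining inference: facts plus rules are applied
# until a fixed point is reached, as in logic programming.

facts = {("connected", "a", "b"), ("connected", "b", "c")}

def apply_rules(facts):
    new = set(facts)
    # Rule 1: connected(X, Y) -> reachable(X, Y)
    for (rel, x, y) in facts:
        if rel == "connected":
            new.add(("reachable", x, y))
    # Rule 2: reachable(X, Y) and reachable(Y, Z) -> reachable(X, Z)
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "reachable" and y == y2:
                new.add(("reachable", x, z))
    return new

# Iterate to a fixed point: no new facts means the answer set is complete.
while True:
    expanded = apply_rules(facts)
    if expanded == facts:
        break
    facts = expanded

print(("reachable", "a", "c") in facts)   # True: derived, never stated
```

A Prolog system does essentially this (though usually by backward chaining from a query), with unification handling the variables automatically.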
Robot programming is the act of sequencing robot motions and activities in an order that will accomplish a desired set of tasks, and of teaching the robot those motions or activities. The activities can include interpretation of sensor data, sending and receiving of signals and data from other devices, or actuation of the joints and end-effector.
Robot programming can take place in two principally different ways: on-line or off-line. On-line programming requires the use of the robot and production equipment, whereas off-line programming is based on computer models of the production equipment.
There are two levels of language-based techniques: (a) explicit robot programming languages and (b) task-level programming. Explicit robot programming languages are robot-level languages and require the robot programmer to be comfortable both in computer programming and in the design of sensor-based motion strategies. Task-level programming, on the other hand, allows the user to specify the effects of actions on objects rather than the sequence of manipulator motions needed to achieve the task, as is the case in explicit robot programming languages. Task-level programming is thus object-oriented rather than robot-oriented: it deals with the effects on objects, not the motions of the robot.
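The contrast between the two levels can be sketched side by side. Everything here is a hypothetical illustration: the `Arm` and `Planner` classes stand in for a robot controller and a task planner, and the motion numbers are arbitrary.

```python
# Contrast between explicit (robot-level) and task-level programming.
# The Arm and Planner classes are hypothetical stand-ins, not a real API.

class Arm:
    def __init__(self):
        self.log = []
    def move_to(self, x, y, z):
        self.log.append(("move", x, y, z))
    def close_gripper(self):
        self.log.append(("grip",))

class Planner:
    """A task planner turns a stated goal into a motion sequence."""
    def __init__(self, arm):
        self.arm = arm
    def achieve(self, goal):
        if goal == "holding(part)":
            self.arm.move_to(0.4, 0.1, 0.30)  # approach above the part
            self.arm.move_to(0.4, 0.1, 0.05)  # descend to grasp height
            self.arm.close_gripper()
            self.arm.move_to(0.4, 0.1, 0.30)  # retract with the part

# Explicit robot-level style: the programmer spells out every motion.
explicit_arm = Arm()
explicit_arm.move_to(0.4, 0.1, 0.30)
explicit_arm.move_to(0.4, 0.1, 0.05)
explicit_arm.close_gripper()
explicit_arm.move_to(0.4, 0.1, 0.30)

# Task-level style: the programmer states only the desired effect.
task_arm = Arm()
Planner(task_arm).achieve("holding(part)")

print(explicit_arm.log == task_arm.log)   # True: same motions, different level
```

The point of the comparison is that both styles produce the same motion sequence; the difference is who writes it, the programmer or the planner.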
The development of robot programming concepts is almost as old as the development of robot manipulators itself. As the ultimate goal of industrial robotics has been the development of sophisticated production machines, tremendous efforts have been undertaken by the international robotics community to design user-friendly and at the same time powerful programming methods. The evolution reaches from early control concepts on the hardware level via point-to-point and simple motion level languages to motion-oriented structured robot programming languages.
As stated in the very first paragraph, a characteristic feature of robot programming is that it usually deals with two different worlds: (1) the real physical world to be manipulated, and (2) abstract models representing this world in a functional or descriptive manner through programs and data. In the simplest case, these models exist purely in the programmer's imagination; in high-level programming languages they may consist of CAD data, for example. In any case, commands based on some model cause robots to change the state of the real world as well as of the world model itself. During a sequence of actions both worlds have to be kept consistent with each other. This can be ensured by integrating internal robot sensors as well as external sensors such as force/torque and vision sensors. As early as the seventies, research groups started to focus on so-called task-oriented robot programming languages. Among the first examples are IBM's AUTOPASS system (another system from IBM is EMILY), the RAPT language developed at the University of Edinburgh, and the LAMA system proposed by MIT.
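The two-worlds idea, a world model kept consistent with the physical world through sensing, can be sketched as a correction step: after each commanded motion, a sensor reading overwrites the model's estimate. The one-dimensional pose, the undershoot factor, and the ideal sensor below are all illustrative assumptions.

```python
# Keeping an internal world model consistent with the physical world:
# after each motion the model is corrected from a (simulated) sensor.
# All values are illustrative; a real system would fuse several sensors.

model_pose = 0.0        # where the program believes the gripper is (m)
true_pose = 0.0         # where the gripper really is (m)

def move(delta):
    global model_pose, true_pose
    model_pose += delta
    true_pose += delta * 0.98          # real actuators undershoot slightly

def read_sensor():
    return true_pose                   # an idealized external position sensor

for _ in range(5):
    move(0.1)
    model_pose = read_sensor()         # correction keeps both worlds aligned

print(abs(model_pose - true_pose) < 1e-9)   # True
```

Without the correction line, the model would drift 2% further from reality on every motion, exactly the inconsistency the paragraph warns about.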
The basic idea behind these approaches is to relieve the programmer from knowing all the specific machine details and to free him from coding every tiny motion or action; instead, he specifies his application at a high abstraction level, telling the machine in an intuitive way what has to be done, not how it has to be done. This implicit programming concept implies many complex modules and leads toward automated robot programming: the high-level commands have to be converted automatically into a sequence of actions and motions by a task-planning system. Therefore, future programming techniques will rely on implicit more than explicit methodologies.
The development of modern motion-oriented robot programming languages started in the mid-seventies. Languages like VAL and AML are examples of early structured robot programming languages that already incorporated sophisticated data structures. As some robot vendors' proprietary languages shipped today still lag behind these early developments, many research laboratories have developed their own languages. Most of these languages are extensions of widespread programming languages like C, C++, or Java (e.g., the object-oriented robot programming language ZERO++ builds on C and C++). Motion-oriented robot programming languages are nowadays indispensable in industrial robot applications; in research they often constitute the basis of higher-level robot programming concepts. MRROC++ is another robot programming language.
One of the essential ingredients of modern robot programming languages is the thorough use of the frame concept: all robot poses, object locations, and motions are expressed, in accordance with human spatial intuition, in terms of Cartesian coordinates.
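The frame concept rests on composing coordinate frames: the pose of an object relative to the robot base is obtained by chaining homogeneous transforms. A two-dimensional sketch (the frame names and numbers are invented for the example):

```python
# The frame concept: poses are expressed as Cartesian frames and chained
# by composing homogeneous transforms. A 2-D sketch using plain lists.

import math

def frame(x, y, theta):
    """Homogeneous transform for a translation (x, y) and rotation theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def compose(a, b):
    """Matrix product a . b: expresses frame b relative to frame a's parent."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# base -> gripper, then gripper -> object, gives base -> object directly.
base_to_gripper = frame(1.0, 0.0, math.pi / 2)
gripper_to_object = frame(0.5, 0.0, 0.0)
base_to_object = compose(base_to_gripper, gripper_to_object)

# The object sits 0.5 m along the gripper's x-axis, which after the 90-degree
# rotation points along the base frame's y-axis: base position (1.0, 0.5).
print(round(base_to_object[0][2], 6), round(base_to_object[1][2], 6))
```

Real robot languages do the same in 3-D (4x4 matrices or quaternions), which is exactly what lets the programmer think in object positions rather than joint angles.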
Usually, advanced applications are heavily dependent on sensor integration of internal as well as external sensors, like force/torque and vision sensors.
Future programming techniques might need:
1. Support, at the language or standard library level, for the common things people program: direct support for HTTP, TCP/IP, XML, Unicode, and i18n, and it would not be surprising if AJAX were included as well
2. Simple paradigms and coding, together with state-of-the-art debugging facilities
3. Higher-level abstractions
4. The ability to cooperate with the outside world
5. More realistic testing mechanisms, and verification of programs through simulation and visualization
6. Support for distributed processing
7. Ready-made method and snippet libraries (as in Java, PHP, and Joomla)
8. Security
9. Better compilation/interpretation and effective performance of program logic and calculations
10. Dynamic typing and complete object orientation
11. Reasonably familiar syntax, based on the principle of least surprise
12. Implementation-defined behaviour, with room to evolve
13. Ease of interactive coding and IDE-friendliness
14. Macro support
15. Support for program documentation and programmer accessories
16. Well-developed threading and inter-process communication (e.g., Erlang)
17. Reuse of existing CAD data
18. Cost independent of production, so that production can continue while programming
19. Time savings and cost efficiency
Object orientation is a good programming metaphor because it accords with our experience of reality. Because of this, it is easy to visualize programs in terms of interacting objects. It uses patterns of thinking that we are familiar with and that are ingrained in our consciousness. Paradigms that follow will certainly take something from OOP.
Current implementations of programming techniques include:
1. Language-oriented programming (started in 2004) (http://rechtman.com/oop.htm)
2. Agent-oriented programming (started in 1990)
3. Super Server paradigm (started in 2006), also known as "Programming Without Coding (PWCT)": a code-free programming gateway that includes the Mahmoud programming language, the RPWI environment, and the DoubleS (Super Server) programming paradigm (http://sourceforge.net/projects/doublesvsoop/ | http://doublesvsoop.sourceforge.net/)
4. Object prototype-based programming (http://isaacproject.u-strasbg.fr/)
With prototype-based programming you create only objects. If you want more of that type of object, you "clone" it. If you want to add or change the functionality of an object, you "clone" it and change what you want in the new "cloned" object. Objects don't have pointers to classes as in class-based languages; they just point to another object. So you may have a tree structure where every object points to another one, all the way up to the root object. (http://coweb.cc.gatech.edu/cs2340/4288)
5. De Jour - This packages data together with the code relevant to the data type. You get an object, and it can interact with other objects based on their properties, attributes, and the interface they present.
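The prototype-based cloning and delegation described above can be sketched in Python, even though Python itself is class-based. The `Proto` container below is an invented illustration: slots are looked up locally first and then delegated up the parent chain, and `clone` stores only the differences.

```python
# Prototype-based objects in a few lines: no class hierarchy, only
# objects that clone one another and delegate lookups to a parent.

class Proto:
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def get(self, name):
        """Look up 'name' here, else delegate up the parent chain."""
        if name in self.slots:
            return self.slots[name]
        if self.parent is not None:
            return self.parent.get(name)
        raise AttributeError(name)

    def clone(self, **overrides):
        """Make a child object; only the differences are stored locally."""
        return Proto(parent=self, **overrides)

robot = Proto(wheels=4, speed=1.0)
fast_robot = robot.clone(speed=5.0)    # overrides one slot, inherits the rest

print(fast_robot.get("speed"))   # 5.0 (own slot)
print(fast_robot.get("wheels"))  # 4   (delegated to the prototype)
```

Languages such as Self and JavaScript build this delegation chain into the language itself, which is what the "tree structure all the way up to the root object" refers to.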
The programming environment is also changing. The two major changes I see are:
- Distributed processing, where the platform a program runs on is no longer a single machine or processor
- Virtual environments, where a program is an object in a 3D virtual world
Resources:
- Robots and Automated Manufacture (John Billingsley, Institution of Electrical Engineers)
- Theory of Automatic Robot Assembly and Programming (Bartholomew O. Nnaji)
- Robot Technology and Applications (Ulrich Rembold)
- Microsoft Encarta 2007
- http://www.voidspace.org.uk/python/articles/object_shaped_future.shtml#id8
- http://prawfsblawg.blogs.com/prawfsblawg/2005/08/the_future_of_c.html
- http://www.cs.chalmers.se/~oloft/Papers/wm96/wm96.html
- http://www.everydayspace.org/2008/02/04/the-future-of-computer-programming/
- http://www.camelot.dk/Offvson.aspx
- bs.de/rob/literatur/download/uth_2002_05_workshop_sfb_b4.pdf
- http://en.wikipedia.org/wiki/Programming_paradigm