History of Automation


Dante II

summary by Russell D.

Dante II is an 8-legged robot whose purpose is to explore remote areas through a combination of supervised autonomous exploration and remote-controlled teleoperation. Dante's unique 8-legged frame has a sliding locomotion system that proved capable in rugged terrain. However, this system doesn't work in every situation, so a rappelling system, similar to a rock climber's, was added to the robot. The robot also helped find the limitations of a tether, but still completed its mission. An on-board winch spooled and unspooled the robot's tether as needed, and the tether was reinforced with strengthening fibers. This allowed multiple uses of the tether: power delivery from an external source, data transfer, and video telemetry from the on-board cameras.

Dante II's mission was to rappel to the bottom of the crater of Mount Spurr, an active volcano in Alaska. Dante would then collect scientific data from its multiple on-board sensors, then scale the crater wall once again to return to its starting point. Dante completed this mission without a physical human presence, although at times the robot was controlled remotely by human operators.

Dante II was used in 1994 to collect scientific data from an active volcano that was too dangerous for human scientists to gather themselves. However, this robot was also a test to iron out flaws in the remote operation of vehicles. Similar robots will be used to explore other planets in the future, and those robots will be more automated. NASA funded the project, and will likely use lessons learned from Dante II and other similar autonomous tests to explore Mars in more depth, and to ensure the safety of any astronauts and possible colonies on Mars or other planets in the future.

Stanford Arm

summary by Ezekiel K.

This robot arm was designed in 1969 by Victor Scheinman, a Mechanical Engineering student working in the Stanford Artificial Intelligence Lab (SAIL). This 6 degree of freedom (6-dof) all-electric mechanical manipulator was one of the first "robots" designed exclusively for computer control. Following experience with a couple of earlier manipulators, the Stanford-Rancho Arm (a modified prosthetic arm) and the Stanford Hydraulic Arm (a high speed but dangerous and difficult to control manipulator), this arm was designed to be easy to control and compatible with the existing computer systems (PDP-6) and the SAIL facility. This arm was entirely built on campus, primarily using shop facilities in the Chemistry Department.

The kinematic configuration of the arm is non-anthropomorphic (not humanoid) with 6 joints (5 revolute, 1 prismatic) and links configured such that the mathematical computations (arm solutions) were simplified to speed up computations. Brakes were used on all joints to hold the arm in position while the computer computed the next trajectory or attended to other timeshared activities. Drives are DC electric motors, Harmonic Drive and spur gear reducers, potentiometers for position feedback, analog tachometers for velocity feedback and electro-mechanical brakes for locking joints. Slip clutches were also used to prevent drive damage in the event of a collision. Other enhancements include a servoed, proportional electric gripper with tactile sense contacts on the fingers, and a 6 axis force/torque sensor in the wrist.
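The payoff of that joint arrangement is that the arm's pose follows from a short chain of matrix products. As a rough illustration (using textbook Denavit-Hartenberg parameters for a Stanford-type arm, with made-up link lengths rather than the real arm's dimensions), the forward kinematics might be sketched like this:

```python
import math

def dh(alpha, a, d, theta):
    """One classic Denavit-Hartenberg link transform."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    """4x4 homogeneous-matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def stanford_fk(t1, t2, d3, t4, t5, t6, d1=0.4, d2=0.15):
    """Forward kinematics for a Stanford-type arm: two revolute joints,
    one prismatic boom, and a spherical wrist. Link offsets d1, d2 are
    illustrative, not the original arm's measured dimensions."""
    links = [
        dh(-math.pi / 2, 0.0, d1, t1),   # waist rotation
        dh( math.pi / 2, 0.0, d2, t2),   # shoulder pitch
        dh(0.0,          0.0, d3, 0.0),  # prismatic boom extension
        dh(-math.pi / 2, 0.0, 0.0, t4),  # wrist roll
        dh( math.pi / 2, 0.0, 0.0, t5),  # wrist pitch
        dh(0.0,          0.0, 0.0, t6),  # tool roll
    ]
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for L in links:
        T = matmul(T, L)
    return T  # 4x4 pose of the tool frame in the base frame
```

Because so many of the a-offsets are zero, most terms in these products drop out, which is exactly the kind of simplification that let the PDP-6 compute arm solutions quickly.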

This robot arm was one of two mounted on a large table with computer-interfaced video (vidicon) cameras and other special tools and tooling. The facility was used by students and researchers for over 20 years for Hand-Eye projects and for teaching purposes, as it was well characterized, reliable and easily maintained. Eventually it was augmented with commercial electric robots and newer Stanford designs, but the Blue arm, a nearly identical twin, is still in occasional use in the Robotics laboratory on this floor.

Some representative projects included assembly of a Model A Ford water pump, partial assembly of a chain saw and solving Instant Insanity colored cube puzzles. These tasks all involved combinations of computer based modeling, planning, object recognition, vision, tactile and force sensing, collision avoidance, control and manipulation. Physical manipulation based tasks require close attention to issues of sequence, process, coordination, support, accuracy, contact and interference.

ASIMO

summary by: Grace F.

ASIMO was the dream of engineers at the vehicle manufacturer Honda, but they could not put it into motion until 1986. From there, they were able to research and create the 4.25-foot robot, ASIMO. ASIMO stands for Advanced Step in Innovative Mobility, and that's exactly what happened! The engineers had to go through several prototypes, such as E1, E2, and E3, which were mainly focused on developing a system to simulate the way humans walk. It was based on studies of people walking up and down stairs and on regular, flat ground. Next came the prototypes focused on stabilization and the stair-climbing aspect of the robot, which were called E4, E5, and E6.

Of course, once they had mastered balancing and walking, they had to make the robot look friendlier, as they aimed to have it operate in society. To do so, they added a head, arms, and a torso, but the first version was ridiculously tall. Coming in at 6'2" and 398 lbs, the thing was giant, and most likely intimidating to anyone looking up at it. The engineers at Honda needed a way to make the design less intimidating and more friendly, so P2 was more compact, and its abilities to walk, climb, and move were improved. Still, it was not enough, so P3, the last prototype, was made to be 5'2" and 287 lbs. In the two decades since, this incredible humanoid robot's range of movement and abilities have grown so that it can run or walk on flat ground or uneven slopes, turn without any rigidness, and even reach for items. Even though this little guy can already do so much, Honda reaches for the stars and hopes that along the way it can inspire young children to create their own best friend.

ASIMO is the "first" humanoid robot created by the vehicle manufacturer Honda, and its name stands for Advanced Step in Innovative Mobility. Honda set out to create a robot that could function in society and be a partner to people. The whole idea started as a new project to improve mobility and movement. The 4.25-foot robot was designed with human movement in mind. Its legs were based on human joint function during activities such as walking up and down stairs and across flat or uneven ground; that research was then applied to ASIMO and enabled the robot to move the way humans do.



The History of Automation:

Artificial Intelligence

by: Riley F.

Introduction:

Defining Artificial Intelligence is rather simple as we know it: it is an extremely advanced technology and a rather modern form of robotics and automation. The idea behind Artificial Intelligence is to simulate the mind and intelligence of a living being, with the eventual goal of simulating the consciousness of a human being. The concept itself traces back to classical philosophers, who tried to describe human thinking as a mechanical process; the modern field took shape in the mid-1900s with the realization that a system of continually switching 1's and 0's could be used to recreate even the most complex systems one could ever desire to simulate. As these dreams ran wild, the thought that perhaps a human mind could be simulated was conceived. Over the years this idea branched outward and expanded into something that seemed achievable. Some of the major branches of Artificial Intelligence include self-evolving programming, machine learning, evolutionary algorithms, genetic algorithms, and genetic programming. In the years since the idea of simulating a human being was first conceived, dozens if not hundreds of works of media have been created exploring this topic in much greater depth.


Machine Learning:

Existing forms of (or at least the most common forms of) artificial intelligence use a very specific style of programming in order to function to their fullest capabilities. This type of programming is known as machine learning. It effectively breaks human intelligence down into several categories: reasoning, knowledge representation, planning, learning, natural language processing, perception, and manipulation. Reasoning is how the system follows patterns and makes deductions. Knowledge representation is how information and knowledge are processed and presented to the system. Planning is how the system decides how to act. Learning is how the system avoids repeating its mistakes. Natural language processing is how the system perceives, receives, breaks down, and builds language constructs in order to use a language as fluently as possible. Perception is how the system takes in the information presented to it. And lastly, manipulation is how the system interacts with both itself and the world around it.


Using these branches of machine learning, the system can combine them to effectively, naturally, precisely, and accurately depict, predict, respond to, receive, perceive, and interact with the world around it. However, aside from these branches and principles of machine learning, there are a few other bases that need to be touched upon.
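To make the "learning" branch above concrete, here is a minimal sketch of a program that stops repeating its mistakes. It is a classic perceptron, chosen here purely as an illustration (it is not a system described in this article): every time it misclassifies an example, it nudges its weights so that the same error becomes less likely.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Minimal perceptron trainer: weights are adjusted only when the
    current prediction is wrong, so the system 'learns' from mistakes."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # nonzero only on a mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function from labelled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After a handful of passes over the data, the weights settle and the program classifies every example correctly, which is the essence of the "not repeating mistakes" idea.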

Visualization of how the brain of an A.I. would think and make decisions.

Artificial Brains:


Suffice it to say that the brain of an A.I. is incredibly complex. The model shown above depicts ~1,916,640 different variations and possibilities. If you were to square that number, you would get ~3,673,508,889,600 possible combinations/calculations per electric pulse. For reference, the human brain is often estimated to be capable of ~100,000,000,000,000 possible combinations/calculations per electric pulse. However, there is an incredible amount of variation between the brains of different A.I.'s. Some have 2 combinations, others have 5, some have 5,000, and so on and so forth.
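The squaring step is easy to check directly (taking the ~1,916,640 figure read off the diagram as given):

```python
# Possibilities counted in the network diagram above
branches = 1_916_640

# Squaring gives the pairwise combination count per electric pulse
combos = branches ** 2
print(combos)  # 3673508889600, i.e. roughly 3.67 trillion
```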


Existing forms of A.I. are certainly impressive, but they come nowhere close to the incredible works of science fiction that have been imagined. Take, for example, the concept of the Jupiter Brain: a supercomputer superintelligence of titanic computing power the size of Jupiter. This concept has been explored quite a few times, not only by science fiction authors, but by very bored mathematicians as well.


Conclusion:


In conclusion, A.I. is certainly an incredible technology, and it has come a long way from where it began. However, we still have a long way to go before reaching the truly incredible concepts we have invented with our imaginations in the wake of science fiction. As science fiction and reality blur together, many have begun to realize that if we are not careful with the way we act upon these technologies, they could be our undoing as a species. But if we remain vigilant and persevere, then A.I. will truly be one of, if not the most, advantageous inventions in the history of mankind.



3.1.1 History Of Automation

The International Space Station

by: Logan K.

On November 20, 1998, a Russian Proton rocket launched from the Baikonur Cosmodrome carrying Zarya, the first module of the International Space Station. A total of 15 nations are part of the ISS, which has grown to the size of a football field, with more modules planned for the station. For the last 20 years, the ISS has been continuously crewed and has performed many vital missions for the advancement of the human race. At 220 miles above us, traveling over 17,000 mph, the space station includes many automated systems as well.


With the ISS being in such a harsh environment, the use of automated systems is crucial. An automated system is "composed of elements designed to perform a set of tasks that have been programmed. Operational and repetitive tasks become less of a burden and makes your life simpler and easier," according to Upland. These systems make it possible for humans to survive and work in the vacuum of space, and to perform experiments that humans may not be able to for whatever the reason may be. The automated systems on the ISS include robotic arms that grab visiting capsules or are used for scientific purposes, and other systems that run experiments or allow the astronauts to survive in space. From the beginning of its existence, the station impressed people with what it was able to accomplish compared to previous space stations. The main contributors to the ISS are NASA and the Russian space agency, along with many others. The ISS is a very important part of our history with space and continues to do great things, and without the use of automation, it wouldn't have been possible.



https://en.wikipedia.org/wiki/International_Space_Station

https://www.issnationallab.org/about/iss-timeline/

https://www.nasa.gov/mission_pages/station/structure/iss_assembly.html



History of Automation

The Stanford Cart

by: Gabe D.

The Stanford Cart was a remote-controlled, TV-equipped robot. It was programmed to navigate its surroundings using an on-board TV system that scanned the objects around it. The robot was programmed in a way that let it learn as it progressed, which helped it move more efficiently and safely through its environment.

There is a similar robot called the CMU Rover which is more capable than its counterpart and closer to being operational, but the Stanford Cart is where it starts. The Stanford Cart is effective at scanning its surroundings; however, it is slow and only effective for short runs. The cart would move about 1 meter every 10 to 15 minutes. It successfully planned and executed several 20 m courses in about 5 hours each. It failed on quite a few different fronts, but it represents the first few steps in the right direction.


The Stanford Cart Revised-

The Stanford Cart was a robot that could "see" the world around it and drive around in it. The robot ran a complicated computer program that drove it around by scanning its surroundings with an on-board TV system. The Stanford Cart is not the only one of its kind, however; there is a more advanced version of this robot called the CMU Rover. The CMU Rover is closer to being fully operational and works faster than its predecessor, but the Stanford Cart is the first of its kind. The CMU Rover pushed the limits of the technology farther.

The Cart used several types of stereopsis to identify the objects around it and understand its surroundings in 3D. It used this information to plan a path through its environment. As the robot moved, it would stop, analyze its surroundings again, and plan its next run. The robot and the system that ran it were reliable for short runs, but they had one overriding problem: they were really slow. The Cart would move 1 m every 10 to 15 minutes, lurching forward one meter and then stopping completely to analyze its surroundings before moving again. It would repeat this process many times as it made its way through different obstacle courses. The Cart managed to successfully drive through many 20-meter courses, taking about 5 hours on each. It was tested on other obstacle courses as well, but it failed in very revealing ways.
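The Cart's lurching behavior amounts to a stop-sense-plan-move loop. Here is a toy sketch of that control cycle; the code is hypothetical, with a simple geometric "planner" standing in for the Cart's actual stereo vision and path planning:

```python
import math

def plan_step(pos, goal, obstacles, step=1.0, clearance=1.0):
    """Choose a ~1 m move toward the goal that stays clear of every
    observed obstacle (a stand-in for the Cart's real planner)."""
    best = None
    for deg in range(0, 360, 15):
        a = math.radians(deg)
        nxt = (pos[0] + step * math.cos(a), pos[1] + step * math.sin(a))
        if any(math.dist(nxt, ob) < clearance for ob in obstacles):
            continue  # candidate move passes too close to an obstacle
        d = math.dist(nxt, goal)
        if best is None or d < best[0]:
            best = (d, nxt)
    return best[1] if best else pos  # stay put if every move is blocked

def drive(start, goal, obstacles, max_lurches=100):
    """Lurch one step at a time: stop, 'sense', plan, move, repeat."""
    pos = start
    path = [pos]
    for _ in range(max_lurches):
        if math.dist(pos, goal) < 1.0:
            break
        pos = plan_step(pos, goal, obstacles)
        path.append(pos)
    return path
```

Each pass through the loop mirrors one of the Cart's 10-to-15-minute lurches: halt, image the scene, choose the next one-meter move, then execute it.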

The Rover's system was designed with flexibility of movement, and full control and perception of its surroundings, as top priorities. Some of its features include an omnidirectional steering system, a dozen on-board processors for essential real-time tasks, and a large remote computer assisted by a high-speed digitizing/data-playback unit and a high-performance array processor. There is also distributed high-level control software, similar in organization to the Hearsay-II speech-understanding system.