1. From Cloud to Fog Robotics
"Fog Robotics is a branch of networked robots that distributes storage, compute and networking resources between the Cloud and the Edge in a federated manner”. -Tanwani et al, 2019
Cloud Robotics uses wireless networking, Big Data, Cloud Computing, statistical machine learning, open-source software, and other shared resources to improve performance in a wide variety of robotic applications. However, communication with faraway Cloud data centers raises several issues: 1) the sheer volume of sensory data continues to increase, leading to high latency, variable timing, and limited bandwidth; 2) the security and privacy of the data is compromised in communication over heterogeneous networks across the Internet.
Fog Robotics enables robots and IoT devices in homes and warehouses to leverage nearby Edge resources as well as distant Cloud data centers. Administrative boundaries of resource ownership keep control of data within domains of trust. The term `Cloud Robotics' refers to the use of networked resources at the center of the network (the `Cloud'), while `Fog Robotics' involves the use of networked resources along the Cloud-Edge continuum (the `Fog').
2. A Fog Robotics Approach to Deep Robot Learning
A Fog Robotics approach to deep robot learning distributes resources between Cloud and Edge for training, adaptation, inference serving and updating of deep models to reduce latency and preserve privacy of the data.
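As an illustrative sketch (the stage names, flags, and placement rules below are our own simplification, not an API from the paper), this Cloud/Edge split can be viewed as a placement policy over the stages of the deep learning pipeline: private or latency-critical stages stay on the Edge, while large-scale training and model updates use the Cloud.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    data_is_private: bool    # e.g. camera images of a real home
    latency_sensitive: bool  # e.g. closed-loop inference serving

def place(stage: Stage) -> str:
    # Private data must not leave the trust domain, and latency-critical
    # serving stays close to the robot; everything else can use the
    # Cloud's elastic compute.
    if stage.data_is_private or stage.latency_sensitive:
        return "edge"
    return "cloud"

pipeline = [
    Stage("train on synthetic images", data_is_private=False, latency_sensitive=False),
    Stage("adapt to real images", data_is_private=True, latency_sensitive=False),
    Stage("serve grasp inference", data_is_private=False, latency_sensitive=True),
    Stage("push model updates", data_is_private=False, latency_sensitive=False),
]
placement = {s.name: place(s) for s in pipeline}
```

Under this policy, training on non-private synthetic data and model updates land on the Cloud, while adaptation on private real images and inference serving land on the Edge, matching the split described above.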
3. Surface Decluttering by Simulation to Reality Transfer
The surface decluttering task requires a mobile robot to recognize objects in the environment, grasp them, and place them into corresponding bins. Surface decluttering by simulation-to-reality transfer with the Toyota HSR: non-private (public) synthetic images of cluttered floors, rendered from 3D meshes of household and machine shop objects (shown on left), are used for large-scale training of deep models on the Cloud. The trained deep models are subsequently adapted to the real objects (shown on right) by learning domain-invariant feature representations with an adversarial discriminator at the Edge, within a secured network.
Output of the deep domain-invariant object recognition and grasp planning model on a simulated image (left) and a real image (right), as seen from the robot's head camera.
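The adversarial adaptation step can be sketched with a toy gradient-reversal loop. The numpy stand-in below is a minimal illustration under our own assumptions (linear feature extractor, logistic domain discriminator, Gaussian "sim" and "real" features); the actual system trains deep convolutional networks on images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: "sim" and "real" features drawn from Gaussians
# with shifted means (the domain gap). Sizes are arbitrary.
d = 8
sim = rng.normal(0.0, 1.0, size=(200, d))
real = rng.normal(1.5, 1.0, size=(200, d))

W = np.eye(d)              # linear feature extractor (learned)
w, b = np.zeros(d), 0.0    # logistic domain discriminator (learned)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

lr, lam = 0.01, 0.5        # lam scales the reversed gradient
X = np.vstack([sim, real])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(real))])  # domain labels
for step in range(200):
    F = X @ W                      # features
    p = sigmoid(F @ w + b)         # discriminator's domain prediction
    err = p - y                    # gradient of binary cross-entropy wrt logits
    # Discriminator descends to tell sim from real ...
    w -= lr * (F.T @ err) / len(y)
    b -= lr * err.mean()
    # ... while the feature extractor receives the REVERSED gradient,
    # pushing its features toward domain invariance.
    W += lr * lam * (X.T @ np.outer(err, w)) / len(y)

# The closer the discriminator's accuracy is to chance (0.5), the more
# domain-invariant the learned features are.
acc = ((sigmoid((X @ W) @ w + b) > 0.5) == y).mean()
```

The task network (object recognition and grasp planning) would be trained on the sim branch at the same time, so the shared features stay both task-relevant and hard to attribute to a domain.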
4. Videos and Results
- Models trained on the Cloud with non-private synthetic data and adapted on the Edge with private real data outperform models trained only on the Cloud with synthetic data or only on the Edge with real data
- Deploying the inference service on the Edge reduces inference time by 4x compared to hosting the service in a Cloud data center on the US East Coast
- The Toyota HSR picked 86 percent of the objects over 213 grasp attempts
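For concreteness, the reported success rate follows from simple arithmetic (the count of 183 successful picks is our illustrative assumption; it is one of the counts that rounds to 86% over 213 attempts):

```python
attempts = 213
successes = 183                  # illustrative: 183/213 rounds to 86%
rate = successes / attempts
summary = f"{rate:.0%}"          # format as a rounded percentage
```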
5. Publications
Ajay Kumar Tanwani, Nitesh Mor, John Kubiatowicz, Joseph E. Gonzalez, Ken Goldberg. "A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering", IEEE International Conference on Robotics and Automation (ICRA), 2019. [pdf][bibtex]
Nan Tian, Ajay Kumar Tanwani, J. Chen, M. Ma, R. Zhang, B. Huang, K. Goldberg, Somayeh Sojoudi. "A Fog Robotic System for Dynamic Visual Servoing", IEEE International Conference on Robotics and Automation (ICRA), 2019. [pdf][bibtex]
6. Media and Links
- We participated in Fog Congress 2018, organized by the OpenFog Consortium
- Fog Computing for Robotics and Industrial Automation
- Fog Robotics for Efficient, Fluent and Robust Human-Robot Interaction
- Cloud Robotics and Automation
- Amazon RoboMaker
- Google Cloud Robotics
- New York Times article on How Robot Hands Are Evolving to Do What Ours Can by Mae Ryan, Cade Metz, and Rumsey Taylor
- NBC Media Coverage of Fog robot decluttering by Joe Rosato Jr.
- A Fog Robotics Approach in Siemens Future Maker Challenge
- SCHooL: Scalable Collaborative Human–Robot Learning
7. Contact Us
Acknowledgements: We thank Flavio Bonomi, Moustafa AbdelBaky, Raghav Anand, Sanjay Krishnan, Michael Laskey, Thanatcha Panpairoj, Daniel Seita, Jonathan Lee, Chris Powers, Richard Liaw, Ron Berenstein, Roy Fox, Kenneth Lutz and Peng Wang for their helpful discussions and contributions.