Fog Robotics

Cloud and Fog Robotics

"Fog Robotics is a branch of networked robots that distributes storage, compute and networking resources between the Cloud and the Edge in a federated manner”

Cloud Robotics uses wireless networking, Big Data, Cloud Computing, statistical machine learning, open-source software, and other shared resources to improve performance in a wide variety of robotic applications. A number of issues arise in communicating with faraway Cloud data centers, including: 1) the sheer volume of sensory data continues to increase, leading to higher latency, variable timing, and limited bandwidth; and 2) the security and privacy of the data can be compromised when it is communicated over heterogeneous networks across the Internet.

Fog Robotics enables robots and IoT devices in homes and warehouses to leverage nearby Edge resources as well as distant Cloud data centers. Administrative boundaries of resource ownership restrict control of data within domains of trust. The term 'Cloud Robotics' indicates the use of networked resources at the center of the network (the 'Cloud'), while 'Fog Robotics' involves the use of networked resources along the Cloud/Edge continuum (the 'Fog').

A Fog Robotics Approach to Robot Learning

A Fog Robotics approach to deep robot learning distributes resources between the Cloud and the Edge for training, adaptation, inference serving, and updating of deep models, in order to reduce latency and preserve the privacy of the data.
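The placement idea above can be sketched as a toy decision rule. This is our own illustration, not the system described in the publications below: the latency numbers, the `Request` type, and the `place` function are hypothetical, chosen only to show how a privacy constraint and a latency budget might steer a request to the Edge or the Cloud.

```python
# Toy sketch (illustrative, not the paper's system): decide whether an
# inference request is served by an Edge replica or a Cloud replica,
# assuming hypothetical round-trip times and a privacy flag on the data.
from dataclasses import dataclass

@dataclass
class Request:
    latency_budget_ms: float       # deadline for the inference result
    contains_private_data: bool    # whether the data may leave the Edge

EDGE_RTT_MS = 10.0    # assumed round-trip time to a nearby Edge server
CLOUD_RTT_MS = 80.0   # assumed round-trip time to a distant Cloud data center

def place(req: Request) -> str:
    """Return 'edge' or 'cloud' for serving this inference request."""
    if req.contains_private_data:
        return "edge"                       # private data stays on the Edge
    if req.latency_budget_ms < CLOUD_RTT_MS:
        return "edge"                       # deadline rules out the Cloud
    return "cloud"                          # otherwise use shared Cloud resources

print(place(Request(50.0, False)))   # deadline too tight for the Cloud
print(place(Request(200.0, False)))  # loose deadline, public data
print(place(Request(200.0, True)))   # privacy constraint dominates
```

Either constraint alone is enough to keep a request at the Edge; only public, latency-tolerant requests go to the Cloud.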

Application to Surface Decluttering by Simulation to Reality Transfer

[Synthetic images]

The surface decluttering task requires a mobile robot to recognize and grasp objects in the environment and place them into corresponding bins. Surface decluttering by simulation-to-reality transfer with the HSR: non-private (public) synthetic images of cluttered floors, generated from 3D meshes of household and machine shop objects (shown on the left), are used for large-scale training of deep models on the Cloud. The trained deep models are subsequently adapted to the real objects (shown on the right) by learning domain-invariant feature representations with an adversarial discriminator at the Edge. The figure shows example outputs of the domain-invariant object recognition and grasp planning model on a simulated image (left) and a real image (right), as seen from the robot's head camera.

[Real Images]
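The adversarial adaptation step described above can be illustrated with a minimal sketch. This is our own toy example, not the paper's implementation: the data is synthetic stand-in features, the feature map is linear, and the discriminator is a single logistic unit. The key structure is real: the discriminator descends on the domain-classification loss while the shared feature map ascends on it (a reversed gradient), so the two domains become indistinguishable in feature space.

```python
import numpy as np

# Illustrative sketch of adversarial feature alignment (not the paper's code).
# Stand-in "simulated" and "real" features differ by a mean shift.
rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(256, 16))   # features from synthetic images
real = rng.normal(0.8, 1.0, size=(256, 16))  # features from real images

W = rng.normal(scale=0.1, size=(16, 8))      # shared feature extractor
d = rng.normal(scale=0.1, size=8)            # logistic domain discriminator

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-np.clip(v, -30, 30)))

x = np.vstack([sim, real])
y = np.concatenate([np.zeros(256), np.ones(256)])  # 0 = simulated, 1 = real
lr = 0.1
for _ in range(300):
    z = x @ W                     # shared feature representation
    p = sigmoid(z @ d)            # discriminator's P(domain = real)
    g = (p - y) / len(y)          # gradient of cross-entropy w.r.t. logits
    d -= lr * (z.T @ g)                    # discriminator: gradient descent
    W += lr * (x.T @ (g[:, None] * d))     # extractor: reversed gradient

# After alignment, the discriminator should be near chance on both domains.
print(sigmoid(sim @ W @ d).mean(), sigmoid(real @ W @ d).mean())
```

In the linear case the extractor can simply remove the domain signal from the projection, so the discriminator's outputs drift toward 0.5 on both domains; deep versions of this idea use a gradient-reversal layer in place of the explicit sign flip.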

  • Models trained on the Cloud with non-private synthetic data and then adapted on the Edge with private real data outperform models trained only on the Cloud with synthetic data or only on the Edge with real data.

  • Deploying the inference service on the Edge reduces inference time by 4x compared to hosting the service in a Cloud data center on the US East Coast.

  • The Toyota HSR was able to pick 86 percent of the objects over 213 grasp attempts.

Publications

  • [new] Ajay Kumar Tanwani, R. Anand, J. E. Gonzalez, K. Goldberg, "RILaaS: Robot Inference and Learning as a Service", IEEE Robotics and Automation Letters, 2020. [pdf][bibtex]

  • [new] Dezhen Song, Ajay Kumar Tanwani, Ken Goldberg, "Networked-, Cloud- and Fog-Robotics", Robotics goes MOOC, Springer Nature MOOCs, Bruno Siciliano (Editor), Springer, 2019. [pdf][bibtex]

  • [new] Nan Tian, Ajay Kumar Tanwani, J. Chen, M. Ma, R. Zhang, B. Huang, K. Goldberg, Somayeh Sojoudi, "A Fog Robotic System for Dynamic Visual Servoing", IEEE International Conference on Robotics and Automation (ICRA), 2019. [pdf][bibtex]

  • [new] Ajay Kumar Tanwani, Nitesh Mor, John Kubiatowicz, Joseph E. Gonzalez, Ken Goldberg. "A Fog Robotics Approach to Deep Robot Learning: Application to Object Recognition and Grasp Planning in Surface Decluttering", IEEE International Conference on Robotics and Automation (ICRA), 2019. [pdf][bibtex]

Media and Links

Contact Us

Please send your feedback and suggestions to Ajay Tanwani: ajay.tanwani@berkeley.edu

Collaborators: We thank Flavio Bonomi, Moustafa AbdelBaky, Raghav Anand, Sanjay Krishnan, Michael Laskey, Thanatcha Panpairoj, Daniel Seita, Jonathan Lee, Chris Powers, Richard Liaw, Ron Berenstein, Roy Fox and Peng Wang for their helpful discussions and contributions.