1. From Cloud to Fog Robotics
"Fog Robotics is a branch of networked robotics that balances storage, compute, and networking resources between the Cloud and the Edge in a federated manner."
Cloud Robotics uses wireless networking, Big Data, Cloud Computing, statistical machine learning, open-source software, and other shared resources to improve performance in a wide variety of robotic applications. However, communication with distant Cloud data centers raises several issues: 1) the sheer volume of sensory data continues to increase, leading to higher latency, variable timing, and limited bandwidth; 2) the security and privacy of the data is compromised by communication over heterogeneous networks across the internet.
Fog Robotics enables robots and IoT devices in homes and warehouses to leverage nearby Edge resources as well as distant Cloud data centers. Administrative boundaries of resource ownership restrict control of data within domains of trust at the Edge of the network. The term 'Cloud Robotics' indicates the use of networked resources at the center of the network (the 'Cloud'), while 'Fog Robotics' involves the use of networked resources at the Edge of the network (the 'Fog').
2. Fog Robotics Architecture
The Fog Robotics architecture uses resources on both the Cloud and the Edge of the network to meet low-latency requirements while preserving the privacy and security of the data.
3. Surface Decluttering by Simulation to Reality Domain Adaptation
Surface decluttering by simulation-to-reality transfer with the HSR: non-private (public) synthetic images of cluttered floors, generated from 3D meshes of household and machine-shop objects (shown on left), are used for large-scale training of deep models on the Cloud. The trained deep models are subsequently adapted to the real objects (shown on right) by learning domain-invariant feature representations with an adversarial discriminator at the Edge, within a secured network.
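The adversarial-adaptation idea above can be illustrated with a toy sketch. Everything below is an illustrative stand-in, not the project's actual pipeline: both "synthetic" and "real" domains are Gaussian feature clusters, the shared feature extractor is a single linear map, and the discriminator is logistic regression. The sketch first trains the discriminator to tell the domains apart, then takes one gradient-reversal step on the extractor, moving it *up* the discriminator's loss so the features become harder to classify by domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, d):
    # Binary cross-entropy of domain predictions p against labels d.
    eps = 1e-9
    return -np.mean(d * np.log(p + eps) + (1 - d) * np.log(1 - p + eps))

n, dim = 64, 8
sim = rng.normal(0.0, 1.0, size=(n, dim))    # stand-in "synthetic" features
real = rng.normal(1.5, 1.0, size=(n, dim))   # stand-in "real" features (shifted)
x = np.vstack([sim, real])
d = np.concatenate([np.zeros(n), np.ones(n)])  # domain labels: 0 = sim, 1 = real

W = rng.normal(0.0, 0.5, size=(dim, dim))    # shared linear "feature extractor"
w = rng.normal(0.0, 0.1, size=dim)           # domain discriminator weights

# 1) Train the discriminator to separate the two domains on fixed features.
f = x @ W
for _ in range(200):
    p = sigmoid(f @ w)
    w -= 0.3 * f.T @ ((p - d) / len(d))      # gradient descent on BCE

loss_before = bce(sigmoid(f @ w), d)
acc_before = ((sigmoid(f @ w) > 0.5) == d).mean()

# 2) One gradient-REVERSAL step on the extractor: ascend (not descend)
#    the discriminator's loss, pushing features toward domain invariance.
g = (sigmoid(f @ w) - d) / len(d)            # dBCE/dlogit per sample
W += 0.05 * np.outer(x.T @ g, w)             # + sign = reversed gradient

loss_after = bce(sigmoid((x @ W) @ w), d)
print(acc_before, loss_before, loss_after)   # loss rises after the reversal step
```

In the full system this tug-of-war runs jointly over deep CNN features, so that the object-recognition head trained on Cloud synthetic data transfers to real Edge data.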
4. Networked Execution Environments via Docker
Software components run in network-connected execution environments packaged and distributed as Docker images: (left) the robot environment, (center) the control environment, (right) the learning environment. One instance of the learning environment trains the deep adversarial object-recognition model on the Cloud using only the non-private synthetic data, while another instance runs at the Edge of the network and adapts the Cloud model to real data, learning invariant feature representations with an adversarial discriminator over the private (real) and non-private (synthetic) data. Deploying the inference service of the control environment at the Edge significantly reduces inference time compared to hosting the service on the Cloud.
5. Videos and Results
- Models trained on the Cloud with non-private synthetic data and adapted at the Edge with private real data outperform models trained only on the Cloud with synthetic data or only at the Edge with real data
- Deploying the inference service at the Edge significantly reduces inference time compared to hosting the service on the Cloud
- The Toyota HSR picked 65 of 69 objects in 85 grasp attempts in our initial round of experiments
6. Media and Links
- We participated in Fog World Congress 2018, organized by the OpenFog Consortium
- Fog Computing for Robotics and Industrial Automation
- Cloud Robotics and Automation
- Google Cloud Robotics
- New York Times article "How Robot Hands Are Evolving to Do What Ours Can" by Mae Ryan, Cade Metz, and Rumsey Taylor
- NBC media coverage of the Fog Robotics decluttering demo by Joe Rosato Jr.
- A Fog Robotics Approach in Siemens Future Maker Challenge
- SCHooL: Scalable Collaborative Human–Robot Learning
- Wired article on Dex-Net as a Service (DNaaS) by Matt Simon
Acknowledgements: We thank Sanjay Krishnan, Michael Laskey, Zisu Dong, Thanatcha Panpairoj, Grant Wang, Raghav Anand, Daniel Seita, Jonathan Lee, Chris Powers, Richard Liaw, Ron Berenstein, Roy Fox, Jeff Mahler, Kenneth Lutz and Peng Wang for their helpful discussions and contributions.