Final program

The workshop takes place on 20 September 2020:

  • 09:30: Opening

  • 09:45 - 10:15: Data-Driven Approach to Place Recognition and Collaborative Mapping using 3D Lidars, Dr. Renaud Dubé

  • 10:15 - 10:45: Cooperative Perception and Localization for Cooperative Driving, Aaron Miller

  • 10:45 - 11:15: Relaxed Quantization for Discretized Neural Networks, Prof. Efstratios Gavves

  • 11:15 - 11:45: Safe and Efficient Reinforcement Learning for Behavioral Planning in Autonomous Driving, Edouard Leurent

  • 11:45 - 12:15: Federated Machine Learning in the Real World, Nadav Tal-Israel, Edgify.AI

  • 12:30 - 14:00: Break (1.5 hours)

  • 14:00 - 14:15: Demonstration by Edgify.AI on Federated Learning

  • 14:15 - 14:45: Collaborative Dense SLAM, Prof. John McDonald

  • 14:45 - 15:15: Efficient and privacy-friendly decentralized learning with an automotive perspective, Dr. Jan Ramon

  • 15:30 - 16:30: Interactive Q&A session with speakers

  • 16:45: Workshop Closing

Speakers

Renaud Dubé completed his PhD at the Autonomous Systems Lab of ETH Zurich under the supervision of Prof. Roland Siegwart. His doctoral work focused on machine perception and real-time localization & mapping for robots equipped with 3D Lidars. To date, Renaud has co-authored 30+ publications and participated in three patent applications. Passionate about teamwork and technology challenges, he and his teammates took first place at the Formula SAE Hybrid international competition in New Hampshire in 2012. In 2017 and 2018, Renaud coached the winning team of the Formula Student Germany Driverless competition. This collaboration also led to the Best Student Paper Award at the IEEE International Conference on Robotics and Automation (ICRA) in 2018. The same year, Renaud co-founded Sevensense Robotics AG, where he now acts as Technology Lead. This Zurich-based company develops cutting-edge localization and navigation solutions for the next generation of autonomous service robots.

Tech lead & Co-founder at Sevensense Robotics AG

Title: Data-Driven Approach to Place Recognition and Collaborative Mapping using 3D Lidars

Abstract: Precisely recognizing places is a fundamental capability for collaborative mapping and re-localization in multi-agent systems. This task, however, remains challenging in unstructured, dynamic environments, where local features are not discriminative enough and global scene descriptors only provide coarse information. In this talk we present machine-learning-based techniques that allow us to globally localize autonomous vehicles equipped with 3D Lidar sensors. Our methods are based on the extraction of segments from 3D point clouds, which offers increased invariance to viewpoint and local structural changes and facilitates real-time processing of large-scale 3D data. Specifically, we leverage compact data-driven descriptors to perform multiple tasks: global localization, 3D dense map reconstruction, and semantic information extraction. Additionally, we present novel methods for recognizing places using only a single sparse 3D Lidar scan and for improving the performance of these descriptors by augmenting them with visual information. We demonstrate the performance of our approaches in multiple experiments on publicly available autonomous driving datasets (KITTI and NCLT). The implementation is available open-source, along with easy-to-run demonstrations, at www.github.com/ethz-asl/segmap
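
The retrieval step that such segment-based methods rely on can be illustrated with a minimal sketch (this is a generic nearest-neighbour illustration, not the SegMap implementation): hypothetical query segments vote for the stored place whose descriptors they match most closely.

```python
import numpy as np

def match_segments(query_desc, map_desc, map_ids, k=5):
    """For each query segment descriptor, retrieve the k nearest
    map segments and vote for the place they belong to."""
    votes = {}
    for q in query_desc:
        d = np.linalg.norm(map_desc - q, axis=1)    # distances to all map segments
        for idx in np.argsort(d)[:k]:               # k nearest neighbours
            place = map_ids[idx]
            votes[place] = votes.get(place, 0) + 1
    # the place accumulating the most votes is the recognition candidate
    return max(votes, key=votes.get)

# toy example: two places, each described by two 3-D segment descriptors
map_desc = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 5, 5], [5.1, 5, 5]])
map_ids = [0, 0, 1, 1]
query = np.array([[5.05, 5, 5]])
print(match_segments(query, map_desc, map_ids, k=2))  # → place 1
```

In the actual approach the descriptors are learned rather than raw coordinates, which is what makes the matching robust to viewpoint and structural change.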

Aaron Miller is currently a graduate student in the Robotics Institute at Carnegie Mellon University and a member of the Search-Based Planning Lab. His research focuses on perception and tracking for autonomous driving. He has previously worked on perception, tracking, and motion planning systems for autonomous drones and autonomous vehicles.

Master’s student, Robotics Institute, Carnegie Mellon University

Title: Cooperative Perception and Localization for Cooperative Driving

Abstract: Fully autonomous vehicles are expected to share the road with less advanced vehicles for a significant period of time. Furthermore, an increasing number of vehicles on the road are equipped with a variety of low-fidelity sensors which provide some perception and localization data, but not at a high enough quality for full autonomy. We present a perception and localization system that allows a vehicle with low-fidelity sensors to incorporate high-fidelity observations from a vehicle in front of it, allowing both vehicles to operate with full autonomy. The resulting system generates perception and localization information that is both low-noise in regions covered by high-fidelity sensors and avoids false negatives in areas only observed by low-fidelity sensors, while dealing with latency and dropout of the communication link between the two vehicles. At its core, the system uses a set of Extended Kalman filters which incorporate observations from both vehicles’ sensors and extrapolate them using information about the road geometry. The perception and localization algorithms are evaluated both in simulation and on real vehicles as part of a full cooperative driving system.
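
The fusion at the core of such a system can be illustrated with a scalar Kalman measurement update. This is a deliberately simplified, hypothetical sketch (the talk's system uses full Extended Kalman filters with road-geometry extrapolation): a high-fidelity observation shared by the lead vehicle tightens the follower's coarse estimate.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the current estimate
    (mean x, variance P) with an observation z of variance R."""
    K = P / (P + R)        # Kalman gain: how much to trust the observation
    x = x + K * (z - x)    # corrected mean
    P = (1 - K) * P        # corrected (reduced) variance
    return x, P

# follower's coarse estimate of an obstacle position (high variance)
x, P = 10.0, 4.0
# lead vehicle shares a high-fidelity observation over the V2V link
x, P = kalman_update(x, P, z=12.0, R=0.25)
print(round(x, 3), round(P, 3))  # → 11.882 0.235
```

The fused estimate moves close to the accurate observation and its variance drops by more than an order of magnitude, which is exactly the benefit the low-fidelity vehicle gains from cooperation.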

Dr. Efstratios Gavves is an Assistant Professor at the University of Amsterdam in the Netherlands and Scientific Manager of the QUVA Deep Vision Lab. After completing his PhD at the University of Amsterdam in 2014, he worked as a post-doctoral researcher at KU Leuven before being invited back to the University of Amsterdam. As part of the QUVA Lab he supervises and guides 12 doctoral and post-doctoral students on machine learning and computer vision. He has authored several papers in the top Computer Vision and Machine Learning conferences and journals, including CVPR, ICCV, ICLR, ICML and NeurIPS, and is also the author of several patents. Further, he has co-organized a series of workshops and tutorials on Spatiotemporal Representations, Video Understanding and Zero-Shot Learning. Efstratios teaches Deep Learning in the MSc in Artificial Intelligence at the University of Amsterdam; all material is available on the course website, uvadlc.github.io, and the course has attracted worldwide interest from those looking to delve into Deep Learning. His research focuses on temporal machine learning and dynamics, novel spatiotemporal representations and deep machine learning.

Assistant Professor at the University of Amsterdam

Title: Relaxed Quantization for Discretized Neural Networks

Abstract: Neural network quantization has become an important research area due to its great impact on the deployment of large models on resource-constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach, and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR-10 and ImageNet classification.
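
As a simplified illustration of the underlying idea (not the paper's exact construction, which places distributions over weights and relaxes the resulting categorical assignments), the sketch below replaces hard nearest-point rounding with a softmax over distances to the quantization grid, which is differentiable and approaches hard rounding as the temperature shrinks:

```python
import numpy as np

def soft_quantize(w, grid, temperature=0.1):
    """Relax the assignment of each weight to a quantization grid point
    into a softmax over negative squared distances. As temperature -> 0
    this approaches hard (nearest-point) rounding, while remaining
    differentiable for gradient-based training."""
    d = (w[:, None] - grid[None, :]) ** 2          # squared distance to each grid point
    p = np.exp(-d / temperature)
    p /= p.sum(axis=1, keepdims=True)              # categorical probabilities per weight
    return p @ grid                                # expected grid value (soft rounding)

grid = np.array([-1.0, 0.0, 1.0])                  # a ternary quantization grid
w = np.array([-0.8, 0.05, 0.6])
print(soft_quantize(w, grid, temperature=0.01))    # ≈ [-1, 0, 1]
```

Because the grid appears inside a differentiable expression, gradients flow to it as well, mirroring the paper's observation that the grid itself can be optimized with gradient descent.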

Edouard Leurent is currently at Inria and Renault Group, where he will be defending his PhD on sequential decision-making for autonomous driving. His research focuses on the interplay of multi-agent interactions, behavioural uncertainty and risk management. Prior to that, he graduated from Mines ParisTech and worked for three years on guidance, navigation and control algorithms for consumer drones as a Control Systems Engineer at Parrot.

PhD Student in Reinforcement Learning, Inria SequeL Team, Inria Valse Team, Renault Group

Title: Safe and Efficient Reinforcement Learning for Behavioral Planning in Autonomous Driving

Abstract: The task of behavioural planning in a dense traffic setting consists of an autonomous agent driving among many other vehicles whose behaviours are uncertain and whose interactions induce complex couplings in the dynamics of the nearby traffic. Ensuring safety thus requires an accurate understanding of these interactions to produce tight and exhaustive probabilistic trajectory forecasts. On the other hand, the safety requirement can often lead to over-conservative behaviours, which motivates the design of learning algorithms that can adapt to any desired level of risk.

Nadav Tal-Israel is the CTO and Co-founder of Edgify.

He holds a Master’s degree in Electrical Engineering from Tel Aviv University and leads Edgify’s R&D, which has been focused on Federated Learning technology that leverages edge devices.

He is a skilled technology executive with over 10 years of experience in machine learning algorithms across various fields in embedded systems and sensor-driven device development, including audio, video and image signal processing.


CTO and Co-Founder, Edgify

Title: Federated Machine Learning in the Real World

Abstract: In recent years, Large Batch training and Federated Learning have emerged as ways of training models in a distributed manner over edge devices, keeping the data on the devices themselves. This holds the immense promise of extending Machine Learning to scenarios that are constrained by data privacy limitations or that simply offer vast data and computational power in this form. There is no straightforward way, however, to simply turn any classical ML/DL system into such an edge-distributed one. In this talk, we will cover a few of the topics and challenges we’ve encountered on our way towards a more systematic solution:

1) Large Batch vs. Federated Learning: Large batch training is the classical training method adapted to the distributed case. This adaptation doesn’t always fit, for example in scenarios where an internet connection is only partially available. Federated Learning, a more radical solution offered for this kind of scenario, aims to save on communication rounds. However, this doesn’t always translate to a reduction in the amount of data transmitted, and it brings in new problems of its own. We will give a basic map of the tradeoff landscape.

2) Compression: With communication bandwidth being the bottleneck in many edge-device-powered scenarios, various compression methods have been suggested. However, they can often be detrimental, or even destructive, to the learning task.

3) Non-IID data distributions: When substantial independent training is done on the edge device, the question of whether its local data represents the overall data becomes critical. This is a central challenge that may determine whether Federated Learning can succeed at all, and it also poses finer issues regarding certain architectural components, which have to be adapted to this unique learning scheme.
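
The aggregation step that both approaches ultimately rely on can be sketched as a FedAvg-style weighted average of client parameters. This is a generic illustration of the scheme, not Edgify's implementation:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg-style aggregation round: average the clients' model
    parameters, weighted by local dataset size. With non-IID client
    data, this weighting (and what exactly gets averaged) becomes a
    critical design decision."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three edge devices with locally trained parameter vectors
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]                 # local dataset sizes
print(federated_average(clients, sizes))  # → [3.5 4.5]
```

Each round transmits only model parameters, not raw data, which is what keeps the training data on the devices; the compression and non-IID issues above are about making these rounds cheap and statistically sound.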

Edgify Federated Learning Demonstration


Federated Machine Learning Demonstration: "Edgify will demonstrate how federated edge training can be leveraged for retail applications. Barcodeless item classification, used on self-checkout machines, can be very beneficial to retailers for improving user experience and for loss prevention. However, it presents significant challenges, mainly variability in conditions across stores (lighting, camera angles, etc.) and data drift over time (changing suppliers, new items appearing, old items becoming irrelevant, etc.). We will show how distributing the model training across the self-checkout machines can help address these challenges."

John McDonald is a Professor in the Department of Computer Science at Maynooth University, and also an affiliate of the Maynooth University Hamilton Institute and the Assisting Living and Learning Institute. His research interests include computer vision, robotics, and AI, focusing on the development of spatial perception and intelligence for autonomous mobile robots. His research has been funded under various research programmes from Science Foundation Ireland, the EU, Enterprise Ireland, and the Irish Research Council. He is currently a Funded Investigator in Lero, the Science Foundation Ireland Research Centre for Software, a named supervisor in the SFI Centre for Research Training in the Foundations of Data Science, and a collaborator on the SFI Blended Autonomy Vehicles Spoke. Previously, he was a visiting scientist at the University of Connecticut, the National Centre for Geocomputation (NCG), and CSAIL at MIT.

Professor in the Department of Computer Science, Maynooth University

Title: Collaborative Dense SLAM

Abstract: We present a new system for live collaborative dense surface reconstruction. Cooperative robotics, multi-participant augmented reality and human-robot interaction are all examples of situations where collaborative mapping can be leveraged for greater agent autonomy. Our system builds on ElasticFusion to allow a number of cameras starting with unknown initial relative positions to maintain local maps using the original algorithm. By carrying out visual place recognition across these local maps, the system can identify when two maps overlap in space, providing an inter-map constraint from which it can derive the relative poses of the two maps. Using these pose constraints, our system performs map merging, allowing multiple cameras to fuse their measurements into a single shared reconstruction. The advantage of this approach is that it avoids replication of structures after loop closures, where multiple cameras traverse the same regions of the environment. Furthermore, it allows cameras to directly exploit and update regions of the environment previously mapped by other cameras within the system. We provide both quantitative and qualitative analyses using the synthetic ICL-NUIM dataset and the real-world Freiburg dataset, including the impact of multi-camera mapping on surface reconstruction accuracy, camera pose estimation accuracy and overall processing time. We also include qualitative results in the form of sample reconstructions of room-sized environments with up to three cameras undergoing intersecting and loopy trajectories.
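
The map-merging step can be illustrated with a toy 2-D example (the actual system works with full 6-DoF camera poses and dense surfel maps): once place recognition matches a keyframe observed in both maps, the transform aligning one map with the other follows by composing the two poses of that keyframe.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid-body transform (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Pose of the same matched keyframe, expressed in map A and in map B
T_A_kf = se2(2.0, 0.0, 0.0)
T_B_kf = se2(0.0, 1.0, np.pi / 2)

# Inter-map constraint: the transform that brings map B into map A's frame
T_A_B = T_A_kf @ np.linalg.inv(T_B_kf)

# Any point reconstructed in map B can now be fused into map A
p_B = np.array([0.0, 1.0, 1.0])   # keyframe position in map B (homogeneous)
p_A = T_A_B @ p_B
print(p_A)                        # keyframe lands at (2, 0) in map A
```

Accumulating many such constraints and merging the surfel maps into a single shared reconstruction is what lets every camera exploit and update regions first mapped by the others.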


Dr. Jan Ramon obtained a PhD in computer science from KU Leuven in 2002 and joined INRIA in 2015. Starting in 2009 he led an ERC Starting Grant on data mining and machine learning with network-structured data. He has also been a PI in several national and international collaborative research projects. He is a member of the editorial boards of leading journals such as Machine Learning Journal, Journal of Machine Learning Research and Data Mining and Knowledge Discovery, and has published in numerous high-quality peer-reviewed journals. While his core expertise is in data science (including, among others, machine learning, algorithms, statistics, knowledge representation and privacy), he has a keen interest in multi-disciplinary research involving data-rich fields and has made contributions to fields including medicine, biology, chemistry, logistics and transportation.


Senior Researcher, MAGNET (MAchine learninG in information NETworks) team, INRIA Lille

Title: Efficient and privacy-friendly decentralized learning with an automotive perspective

Abstract: Modern cars can generate more data worth analyzing than can be affordably transported by a classic 4G network.

There is an increasing need for intelligent strategies to process and analyze this data, and as each strategy has some drawbacks, a combination of strategies will probably be needed in the future. In this presentation, I'll discuss decentralized learning, with special attention to the automotive setting. Among other things, I'll argue that decentralized learning allows for improved privacy-friendliness and improved efficiency. I'll also discuss a number of limitations, where restricted forms of coordination can be useful.

Interactive Q&A Session