Speakers from Academia
Prof. Erika Abraham (RWTH Aachen University)
Prof. Michael Beetz (University of Bremen)
Dr. Leif Christensen (DFKI)
Prof. Francesco Maurelli (Jacobs University Bremen)
Speakers from Industry
Dr. Yury Brodskiy (EIVA)
Dr. Peter Kampmann (ROSEN)
Dr. Stephanie Kemna (Maritime Robotics)
Dr. Yulia Sandamirskaya (Intel)
Dr. Jakob Schwendner (Kraken Robotik)
Curious to get to know these brilliant professionals?!
Read their short bios below, along with a summary of what to expect from their talks!
Speakers from Academia
Erika Abraham graduated from the Christian-Albrechts-University Kiel (Germany) and received her PhD from the University of Leiden (The Netherlands) for her work on the development and application of deductive proof systems for concurrent programs. She then moved to the Albert-Ludwigs-University Freiburg (Germany), where she began working on the development and application of SAT and SMT solvers. Since 2008 she has been a professor at RWTH Aachen University (Germany), with a main research focus on SMT solving for real and integer arithmetic, and formal methods for probabilistic and hybrid systems.
Talk Title: SMT-based Planning - Some recent developments
Planning is a highly challenging, central module in robotic systems. For high-level planning, SMT solving has been shown to be a good alternative to traditional approaches. In this talk, we report on some recent developments and ideas in this area.
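The core idea behind SMT-based planning is to encode a bounded planning problem as a set of constraints and let a solver find a satisfying assignment. The toy sketch below illustrates only this encoding idea on a hypothetical three-waypoint mission; a real planner would hand such constraints, over richer arithmetic theories, to an SMT solver such as Z3 rather than brute-forcing assignments.

```python
# Toy planning-as-satisfiability sketch (hypothetical example, not the
# speaker's system). Decision variables: one waypoint per time step.
# Constraints: start at the dock, end at the wreck, moves follow edges.
from itertools import product

WAYPOINTS = ["dock", "pipe", "wreck"]
EDGES = {("dock", "pipe"), ("pipe", "wreck"), ("pipe", "dock"), ("wreck", "pipe")}
HORIZON = 3  # bound on plan length, as in bounded model checking

def valid_plan(path):
    """Check the planning constraints for one candidate assignment."""
    if path[0] != "dock" or path[-1] != "wreck":
        return False
    return all((a, b) in EDGES for a, b in zip(path, path[1:]))

def solve():
    # Enumerate assignments of a waypoint to each time step -- the job
    # an SMT solver does far more cleverly via conflict-driven search.
    for steps in range(1, HORIZON + 1):
        for path in product(WAYPOINTS, repeat=steps + 1):
            if valid_plan(path):
                return list(path)
    return None  # unsatisfiable within the horizon

print(solve())  # -> ['dock', 'pipe', 'wreck']
```

Increasing the horizon until the constraints become satisfiable mirrors the iterative-deepening strategy commonly used when planning with SAT/SMT encodings.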
Michael Beetz is a professor of Computer Science at the Faculty of Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). He received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. In February 2019 he received an Honorary Doctorate from Örebro University. He was vice-coordinator of the German cluster of excellence CoTeSys (Cognition for Technical Systems, 2006-2011), coordinator of the European FP7 integrating project RoboHow (web-enabled and experience-based cognitive robots that learn complex everyday manipulation tasks, 2012-2016), and is the coordinator of the German collaborative research centre EASE (Everyday Activity Science and Engineering, since 2017). His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.
Talk Title: Knowledge representation & reasoning in CRAM, a cognitive architecture for robot agents accomplishing underdetermined manipulation tasks
Robotic agents that can accomplish manipulation tasks with the competence of humans have been one of the grand research challenges for artificial intelligence (AI) and robotics research for more than 50 years. However, while the fields have made huge progress over the years, this ultimate goal is still out of reach. I believe that this is the case because the knowledge representation and reasoning methods that have been proposed in AI so far are necessary but too abstract. In this talk I propose to address this problem by endowing robots with the capability to internally emulate and simulate their perception-action loops based on realistic images and faithful physics simulations, which are made machine-understandable by casting them as virtual symbolic knowledge bases. These capabilities allow robots to generate huge collections of machine-understandable manipulation experiences, which robotic agents can generalize into commonsense and intuitive physics knowledge applicable to open varieties of manipulation tasks. The combination of learning, representation, and reasoning will equip robots with an understanding of the relation between their motions and the physical effects they cause at an unprecedented level of realism, depth, and breadth, and enable them to master human-scale manipulation tasks. This breakthrough will be achievable by combining leading-edge simulation and visual rendering technologies with mechanisms to semantically interpret and introspect internal simulation data structures and processes. The talk will close with an outlook on applying CRAM to underwater missions.
After completing his master's degree in computer science (Dipl.-Inf.) in 2008, Leif Christensen obtained his PhD from the Faculty of Mathematics and Computer Science at the University of Bremen in 2019. He currently heads the maritime robotics group at the German Research Center for Artificial Intelligence – Robotics Innovation Center (DFKI – RIC) and has been in charge of several projects related to maritime and space robotics in recent years. His previous experience includes probabilistic robotic navigation, with a special focus on magnetic field-based localization and maritime applications. He has been the head of the navigation and planning group at DFKI RIC and is currently an appointed member of the International Advisory Board of SENAI/CIMATEC Brazil and of the German maritime research think tank "Zukunftsforum Ozean". Dr. Christensen is a reviewer for various journals and conferences in the area of robotics and for the European Space Agency (ESA) BIC Northern Germany. He also holds a degree in law from the University of Freiburg.
Talk Title: Autonomy in marine robotics - AI or automation?
The talk will first give a brief overview of current robotic systems in the maritime context, from floaters and gliders, through remotely operated vehicles (ROVs) and hybrid ROVs, to autonomous underwater vehicles (AUVs) and subsea crawlers with different grades of autonomy. We will then take a deeper look into the software architectures and autonomous capabilities of such systems and discuss autonomous mission planning, task decomposition and execution, concepts like behaviour trees, approaches to sensor fusion and environment representation, and how systems can reason about their own state and the marine environment and act accordingly. The talk will also showcase two real-world examples of the successful application of machine learning methods to marine robots: identifying the motion model and learning magnetic field distortions for navigation purposes. Throughout the talk, we will address the question of how far the field of marine robotics has come towards truly autonomous systems and behaviours.
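Behaviour trees, mentioned in the abstract, structure a robot's task execution as a tree of control nodes ticked from the root. The following is a minimal, self-contained sketch of the two core node types on a hypothetical AUV mission; production systems would use a framework such as py_trees or BehaviorTree.CPP rather than this illustration.

```python
# Minimal behaviour-tree sketch (illustrative only; node names and the
# mission are hypothetical, not from the speaker's systems).
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for c in self.children:
            if c.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for c in self.children:
            if c.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Leaf node: reads a flag from the shared blackboard."""
    def __init__(self, key): self.key = key
    def tick(self, bb): return SUCCESS if bb.get(self.key) else FAILURE

class Action:
    """Leaf node: pretends to execute and records its effect."""
    def __init__(self, name, effect): self.name, self.effect = name, effect
    def tick(self, bb):
        bb.update(self.effect)
        return SUCCESS

# Mission: inspect the pipeline, but surface first if the battery is low.
tree = Selector(
    Sequence(Condition("battery_low"), Action("surface", {"surfaced": True})),
    Sequence(Condition("pipeline_found"), Action("inspect", {"inspected": True})),
)

bb = {"battery_low": False, "pipeline_found": True}
tree.tick(bb)
print(bb)  # the 'inspect' branch ran
```

The Selector gives the safety branch priority: flipping `battery_low` to `True` makes the same tree surface instead of inspecting, which is the kind of reactive fallback logic that makes behaviour trees popular over flat state machines.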
Dr. Francesco Maurelli is a Professor of Marine Systems and Robotics at Jacobs University Bremen (Germany, EU), where he also serves as Program Chair for the Robotics and Intelligent Systems Program. He obtained his Ph.D. at the Ocean Systems Lab, Heriot-Watt University (Edinburgh, Scotland) with a thesis on intelligent AUV localisation. He was Scientific Manager at the Technical University of Munich (Germany, EU), where he led the ECHORD++ program, which supports moving robotics technology from the lab to the market. After a research stay at the Massachusetts Institute of Technology (Cambridge, MA, USA) in the framework of a Marie Curie Fellowship, he accepted a faculty position at Jacobs University Bremen. Dr. Maurelli's research interests are focused on persistent autonomy for marine robotics, autonomous navigation, intelligent decision making, sensor data processing, and fault management.
Talk Title: The Long Way to Persistent Autonomy in Underwater Robotics
The future vision for ocean observatories foresees a multitude of permanently deployed autonomous vehicles, communicating with each other and having intelligent decision-making capabilities in navigation, energy management, and error handling. There is, however, a stark contrast with current capabilities: commercial vehicles are very good at blindly executing pre-defined paths, they easily get stuck when conditions change, and limited on-board decision making severely restricts the types and length of missions. What are the main challenges in bridging our future vision with the current reality? This talk will present the results of various projects in the last few years aimed at increasing vehicles' capabilities. A dynamically updated probabilistic knowledge base opens the door to AI techniques like knowledge representation and reasoning. Classical AI planning can be applied to robotics missions instead of efficient but not very adaptable state machines. Recent advances in deep learning can provide important improvements in sensor processing and object recognition. With on-board sensor processing, decision making, and fault management, underwater vehicles seem to move forward significantly in the field of autonomy. However, how can promising research results, which are still validated under relatively controlled conditions, be generalised to produce a real impact in the field?
Speakers from Industry
Dr. Yury Brodskiy is a Computer Vision specialist at EIVA. Together with his team, he works on developing underwater visual SLAM applicable to maritime survey and engineering operations. Yury holds a PhD in robotics with a focus on control theory and reliability. Before joining EIVA, he contributed to the development of actively used high-reliability medical robots at Philips and high-precision lithographic systems at ASML.
Talk Title: Serving AI for underwater applications
The EIVA software team focuses on developing sensor- and platform-agnostic solutions for underwater applications to accommodate our customers' needs. Research in AI offers tools and methods that can provide invaluable automation possibilities for underwater engineering and survey, and thus we constantly evaluate and adopt new AI developments. In this talk, an AI-based solution for automating underwater pipeline inspection will be presented: a mature but constantly improving EIVA product, first deployed to our customers three years ago. The employed development methodology is now considered an AI industry standard. It is a data-hungry and computationally intensive approach that requires periodic adjustments but yields consistently high-quality results under a variety of conditions. We will look into the advantages and limitations of this method.
Peter Kampmann is a technical computer scientist with a doctorate in engineering in robotics. Since 2019, he has been leading a team of engineers in the area of subsea R&D at the ROSEN Technology and Research Center in Bremen, Germany. Previously, he worked for 12 years as a researcher, project leader and team leader at the Robotics Innovation Center of the German Research Center for Artificial Intelligence. Besides his current work in industry, he lectures in marine robotics at the University of Oldenburg.
Talk Title: A roadmap towards integrating results from machine learning in online automated decision making for autonomous underwater robots
Besides a well-tuned network setup, the quality of results from machine learning algorithms depends to a large extent on the quality of the training dataset. If autonomous robots face situations not covered by the training data, there is no human to take over control as there is for assisted driving in cars. This is especially true for underwater robots, where the connection to a supervisor on land is limited by the bandwidth of underwater acoustics. Misclassified results can thus lead to fatal decisions, which have to be avoided at all costs, especially for underwater robots, as these often operate in critical environments. This talk approaches this topic and proposes some ways to start integrating results from machine learning algorithms into the autonomy of underwater robots.
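One common pattern for keeping misclassifications out of autonomous decision making is to gate classifier outputs by confidence and fall back to a safe behaviour otherwise. The sketch below is a hedged illustration of that pattern; the labels, logits, and threshold are hypothetical and not taken from the speaker's work.

```python
# Confidence-gated classification (illustrative sketch; all names and
# numbers are hypothetical). Low-confidence predictions trigger a safe
# fallback instead of entering the robot's decision making.
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gated_decision(logits, labels, threshold=0.9):
    """Return a label only if the classifier is confident enough;
    otherwise request a safe fallback (e.g. abort or re-survey)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best]
    return "fallback:resurvey"

labels = ["pipeline", "debris", "anchor_chain"]
print(gated_decision([9.0, 1.0, 0.5], labels))  # confident -> 'pipeline'
print(gated_decision([2.0, 1.8, 1.7], labels))  # ambiguous -> fallback
```

Softmax confidence is only a crude proxy for uncertainty; approaches such as calibrated probabilities or ensemble disagreement would be more robust, but the gating structure stays the same.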
Stephanie Kemna, PhD, is Research Manager at Maritime Robotics AS, based in Trondheim, Norway. She has 11+ years of experience developing software and algorithms for autonomous underwater vehicles and autonomous surface vehicles. She is currently coordinating several nationally and internationally funded R&D projects at Maritime Robotics. Dr. Kemna holds a BSc and MSc in Artificial Intelligence from the University of Groningen, as well as a PhD in Computer Science from the University of Southern California. From 2009 to 2012 she worked as a researcher at the NATO STO Centre for Maritime Research and Experimentation in Italy.
Talk Title: Towards fully autonomous surveys with teams of autonomous surface and underwater vehicles
At Maritime Robotics, we are developing autonomous system solutions for the maritime domain. Our primary focus has been on developing autonomous surface vehicles (ASVs), which are used with a suite of sensors for seafloor mapping and environmental monitoring. The recent pandemic has highlighted some of the benefits of using marine robots: fewer crew are required for remote or autonomous operations. For operations with autonomous and remotely operated underwater vehicles, crewed surface vessels are currently still typically needed for deployment and recovery, as well as for overseeing operations. In this talk, I will briefly present some of the research projects we are involved in, which aim to enable ASVs to work in support of underwater vehicles: for data sharing and improved underwater localization, as well as future autonomous launch and recovery. Apart from the application-specific demands this places on the ASV, it also requires that we push the level of autonomy of the ASV itself.
Dr. Yulia Sandamirskaya leads the Applications Research team of the Neuromorphic Computing Lab at Intel. Her team develops spiking neural network-based algorithms for neuromorphic hardware to demonstrate the potential of neuromorphic computing in real-world applications. Before joining Intel, Yulia led the group “Neuromorphic Cognitive Robots” at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich. She chaired EUCog, the European Society for Artificial Cognitive Systems, and coordinated the EU project NEUROTECH, which creates and supports the neuromorphic computing technology community in Europe.
Talk Title: Event-based vision and neuromorphic computing for autonomous robots
Event-based cameras are low-latency, low-power, low data rate, and high dynamic range visual sensors that harness the properties of biologically-inspired sensing and computing. These characteristics can be exploited in challenging environments, reducing motion blur and coping with conditions of varying contrast in the field of view. When combined with neuromorphic computing hardware that features efficient computing of event-based (spiking) neuronal networks and scalable on-chip plasticity, event-based vision and computing could lead to a new family of computing systems that will enable low-power and low-data AI for robotic agents working in challenging environments, including underwater robotics.
Jakob Schwendner is the managing director of Kraken Robotik GmbH and is involved in the development of the SeaVision(TM) system at Kraken. With over 15 years of experience in the field of robotics, Jakob worked at DFKI for 10 years and led the autonomy team for three years. His application areas at DFKI were mainly in the space and underwater robotics domains. He has participated in multiple research cruises and field campaigns. Jakob holds a PhD in the area of navigation and sensor fusion using probabilistic methods.
Talk Title: Underwater Perception with Laser Scanners
Laser scanners are readily available for terrestrial applications and provide the basis for many autonomous systems. This does not transfer easily to underwater applications, where optical sensors face multiple challenges. Given the right conditions, however, laser scanners can provide an important foundation for AI perception pipelines in underwater applications.