We offer interesting topics for student theses and projects as well as paid internships. Feel free to contact me for more information.
Topics other than those listed below can be defined upon request: simply drop by at KN-E211 or write an email to matej.hoffmann [guess-what] fel.cvut.cz
A formal list of currently open topics supervised by Matej Hoffmann is available here: https://hub.fel.cvut.cz/topics/semestral_projects (filter for projects supervised by Matěj Hoffmann).
Below is a less formal list that also includes smaller projects suitable for internships etc.
Extracting movement kinematics and gaze direction from videos of children
To understand the sensorimotor development of children in the first two years after birth, it is important to have quantitative data about their movement kinematics: which joints they use, what the velocity and acceleration profiles are, where in 3D space they can reach, etc. With the development of new computer vision algorithms based on deep learning, it is now possible to extract 3D kinematics from nothing more than RGB videos of moving people.
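As an illustration of the intended pipeline, here is a minimal sketch that extracts 3D wrist positions from a video with the MediaPipe Pose library and derives a velocity profile by finite differences (the library choice, the file name, and the frame rate are our illustrative assumptions, not fixed project decisions):

# Minimal sketch: per-frame 3D joint extraction with MediaPipe Pose,
# followed by a wrist velocity estimate via finite differences.
# Simplification: frames without a detected pose are skipped.
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def wrist_positions(video_path):
    positions = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_world_landmarks:  # metric 3D landmarks, hip-centered
                lm = result.pose_world_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
                positions.append([lm.x, lm.y, lm.z])
    cap.release()
    return np.array(positions)

pos = wrist_positions("reaching_session.mp4")   # hypothetical file name
vel = np.gradient(pos, 1.0 / 30.0, axis=0)      # m/s, assuming 30 fps
speed = np.linalg.norm(vel, axis=1)             # wrist speed profile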
Modeling biological vision with neuromorphic spiking neural networks
Traditional robotic vision systems face significant computational and energy demands. Active vision in particular requires continuous visual input and computation, and hence a high volume of RGB frames in conventional computer vision pipelines.
Unconventional approaches depart from classical frame-based techniques by leveraging biologically inspired vision sensors, known as event cameras, that mimic the mammalian visual system. These cameras have demonstrated their capabilities in online robotic applications, showing significant reductions in latency, power consumption, and data redundancy (ref). Furthermore, the synergy between event-based vision sensors and neuromorphic computing, which mimics neuronal populations through spiking neural networks (SNNs), has the potential to further reduce the computational burden.
This project aims to leverage bio-inspired software and hardware principles to enhance the capabilities of spike-based algorithms.
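For intuition about the building block involved, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of most SNNs, driven by a sparse binary event stream such as a single event-camera pixel might produce (plain NumPy; all parameter values are illustrative, not tuned):

# Minimal sketch of a leaky integrate-and-fire (LIF) neuron driven by
# a binary event stream (e.g., ON events from one event-camera pixel).
import numpy as np

rng = np.random.default_rng(0)
events = (rng.random(1000) < 0.05).astype(float)   # sparse input spike train

tau, v_thresh, v_reset, w = 20.0, 1.0, 0.0, 0.5    # time constant (steps), threshold, reset, weight
v = 0.0
out_spikes = np.zeros_like(events)

for t, s in enumerate(events):
    v += (-v / tau) + w * s        # leak toward rest, integrate weighted input
    if v >= v_thresh:              # fire and reset when the threshold is crossed
        out_spikes[t] = 1.0
        v = v_reset

print(f"input events: {events.sum():.0f}, output spikes: {out_spikes.sum():.0f}")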
Airskin - artificial electronic skin: from industrial robots to reaching in clutter, social, and mobile robotics
This family of projects involves the artificial pressure-sensitive electronic skin Airskin (https://www.airskin.io/), which is used to turn industrial robots into collaborative ones. The standard behavior stops the machine once a collision is detected. With a "debug controller", we can instead respond to a detected collision however we choose (a minimal sketch of such a response follows the list below). There are four projects that leverage this new use of the skin:
Robots learning to move with artificial sense of pain.
Localization and grasping with only haptic feedback.
Industrial manipulator with social touch.
Aiding LIDAR traversability perception through touch.
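To make the "respond however we choose" idea concrete, here is a minimal, self-contained sketch of a response loop that retracts along the touched pad's outward normal instead of stopping; every name and number in it is a hypothetical placeholder, not the real Airskin or robot API:

# Self-contained simulation of the "respond instead of stop" idea:
# a touched pad triggers a retreat along that pad's outward normal.
import numpy as np

PAD_NORMALS = {0: np.array([1.0, 0.0, 0.0]),    # outward normals of two pads
               1: np.array([0.0, 1.0, 0.0])}

def simulated_pressed_pads(step):
    """Stand-in for the real pad readout: pad 0 is touched at step 5."""
    return [0] if step == 5 else []

tool_pos = np.zeros(3)
for step in range(10):
    pads = simulated_pressed_pads(step)
    if pads:
        tool_pos += 0.05 * PAD_NORMALS[pads[0]]  # retract 5 cm along the pad normal
        print(f"step {step}: contact on pad {pads[0]}, retreating to {tool_pos}")
    else:
        tool_pos += np.array([0.0, 0.0, 0.01])   # otherwise continue the task motion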
Measurement and modeling of robot collision forces for safe HRI
The aim of the project is to collect collision data and analyze the maximum safe speeds of the robot so that impact force limits are not exceeded. We use Universal Robots UR10e and KUKA LBR iiwa 7 R800 manipulators. Building on previous work, we plan to generalize our approach to collisions in all directions and find the relationship to the effective mass of the robot.
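For background, the link between speed and effective mass is often expressed through the two-mass contact model associated with ISO/TS 15066, where the transferred energy E = F_max^2 / (2k) is bounded by the kinetic energy of the reduced mass; a sketch of the resulting speed limit (the example numbers are illustrative, not values taken from the standard):

# Two-mass contact model: E = F_max^2 / (2k) <= 0.5 * mu * v^2
# gives v_max = F_max / sqrt(mu * k), with mu the reduced mass of
# robot and human body part. Example numbers are illustrative only.
import math

def v_max(f_max, k, m_robot_eff, m_human):
    """Maximum relative impact speed [m/s] for a transient force limit."""
    mu = 1.0 / (1.0 / m_robot_eff + 1.0 / m_human)   # reduced (effective) mass [kg]
    return f_max / math.sqrt(mu * k)

# e.g., 280 N force limit, 75 N/mm stiffness, 20 kg effective robot mass, 40 kg body part
print(f"{v_max(280.0, 75e3, 20.0, 40.0):.2f} m/s")   # -> 0.28 m/s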
Tactile stimulation for experiments with humans
The first main objective of this project is to build tactile stimulators that provide several different types of tactile inputs and allow their parameters to be controlled. For instance, for vibrotactile stimuli, the parameters to control are frequency and amplitude; for pressure stimuli, the pressure level and its duration; for a shear stimulus, the shear force needs to be tightly controlled. The second main objective is to pilot the use of these devices in tactile localization experiments with human adults, analyze the experimental data, and draw conclusions. The fundamental scientific outcome of this project will contribute to understanding how specialized tactile receptors contribute to the perception of touch and to its spatial localization on the body. The outcome could also be applied to help people using limb prostheses, where one goal is to find a non-invasive way of transmitting to the person the touch sensed by the prosthesis; knowing how different tactile receptors function could, for instance, allow more precise feedback from prostheses, increasing their acceptance and usability by end users.
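For the vibrotactile case, controlling frequency and amplitude can amount to synthesizing the drive waveform in software; here is a minimal sketch that assumes a voice-coil tactor driven through a sound card via the sounddevice package (our assumption, not the project's fixed hardware):

# Minimal sketch: synthesize a vibrotactile burst with controlled
# frequency, amplitude, and duration, played through a sound card.
import numpy as np
import sounddevice as sd

def vibrotactile_burst(freq_hz=250.0, amplitude=0.5, duration_s=0.2, fs=44100):
    t = np.arange(int(duration_s * fs)) / fs
    ramp = np.minimum(1.0, np.minimum(t, duration_s - t) / 0.01)  # 10 ms on/off ramps
    return amplitude * ramp * np.sin(2 * np.pi * freq_hz * t)

stim = vibrotactile_burst(freq_hz=250.0, amplitude=0.5, duration_s=0.2)
sd.play(stim, samplerate=44100)   # 250 Hz targets Pacinian-corpuscle sensitivity
sd.wait()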
Haptic feature extraction from robotic grippers with/without tactile sensors
This project focuses on representing the shapes of objects using haptic exploration. In computer-aided design, one method of constructing objects in 3D space is to describe them as a set of "haptic primitives" {vertices, edges, surfaces} that can be aggregated into the object. If we are able to obtain similar haptic primitives from different robotic setups, we can develop a haptic object representation that transfers across them.
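One way to make such a representation concrete is a small, setup-independent data structure; the fields below are our illustrative assumption, not an established format:

# Sketch of a setup-independent container for haptic primitives.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HapticPrimitive:
    kind: str                  # "vertex", "edge", or "surface"
    position: np.ndarray       # contact point in the object frame [m]
    normal: np.ndarray         # estimated surface normal at the contact
    confidence: float = 1.0    # reliability of the haptic estimate

@dataclass
class HapticObjectModel:
    primitives: list[HapticPrimitive] = field(default_factory=list)

    def add(self, p: HapticPrimitive):
        self.primitives.append(p)

model = HapticObjectModel()
model.add(HapticPrimitive("vertex", np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 1.0])))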
In this project, we will explore two methods of haptic primitive extraction:
With tactile sensors: the DIGIT optical tactile sensor (https://digit.ml/) has a membrane surface that deforms when in contact with another object. This deformation is recorded by a camera and can be processed for touch interpretation (force and friction measurement, shape recognition). DIGIT can be attached to the Robotiq 2F-85 gripper to give it tactile sensing capabilities (see the sketch after this list).
Without tactile sensors: using the Robotiq 2F-85 gripper without any tactile sensors, we will attempt to extract the same information using the concept of sensorimotor contingencies (SMCs): a bio-inspired method of sequential exploration that accumulates context and information over time.
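For the sensor-based route, here is a minimal sketch of contact detection that differences the current DIGIT image against a no-contact reference; we assume the open-source digit-interface Python package, and the serial number and threshold are placeholders to be calibrated:

# Minimal sketch: detect contact on a DIGIT sensor by differencing the
# current membrane image against a reference captured with no contact.
import numpy as np
from digit_interface import Digit

d = Digit("D00001")           # placeholder serial number
d.connect()

reference = d.get_frame().astype(np.int16)    # capture with nothing touching

while True:
    frame = d.get_frame().astype(np.int16)
    diff = np.abs(frame - reference).mean()   # mean per-pixel change
    if diff > 8.0:                            # illustrative contact threshold
        print(f"contact detected (mean diff {diff:.1f})")
        break

d.disconnect()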