Robotics Engineer

LinkedIn | GitHub | achille.verheye at gmail

Mars Rover

As a side project, I'm building the JPL Open Source Rover, a simplified scale model of the Mars Curiosity rover. This is my first venture into space robotics, and I'm using the platform to develop and test new algorithms for navigation and heterogeneous sensor calibration. Also for following me to the park while carrying my beers :)

Service Robots - Peanut Robotics

As one of the first three engineers, I built out the software stack for autonomous service robots used for commercial cleaning:

  • Set up and tuned 3D mapping and localization with heterogeneous sensor fusion
  • Wrote a custom motion planner from scratch around a fast analytical inverse kinematics solver (C++) derived for a new type of arm (7-DOF with a spherical wrist and offsets at the elbow and shoulder, until then an unsolved problem). Set up Descartes (ROS-Industrial) for reactive, toleranced Cartesian motion planning and sped up planning 100x using proprietary techniques
  • Heterogeneous sensor self-calibration using hand-eye calibration
  • Sensor integration in C++
  • Set up and maintained a clustered CouchDB database infrastructure for operational data
  • Used PCL in C++ to compare a pre-made map against live point cloud data using efficient octomap operations, highlighting changes in the robot's environment
  • Grasping using deep learning
  • ROS
  • Various: full robot control via Xbox teleop, Travis CI integration and test suite, computer infrastructure, distributed processing on embedded systems, ...
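The map-vs-live comparison above used PCL and octomaps in C++; the underlying idea can be sketched in a few lines of Python, with a simple voxel hash standing in for the octree (illustrative only, not the production code):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantize Nx3 points into a set of occupied voxel indices."""
    return {tuple(v) for v in np.floor(points / voxel_size).astype(int)}

def detect_changes(map_points, live_points, voxel_size=1.0):
    """Voxels occupied in the live scan but not the pre-made map
    (new obstacles), and vice versa (removed objects)."""
    map_vox = voxelize(map_points, voxel_size)
    live_vox = voxelize(live_points, voxel_size)
    return live_vox - map_vox, map_vox - live_vox

# toy example: an object moved from (1,0,0) to (2,0,0)
premade = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
live    = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
added, removed = detect_changes(premade, live, voxel_size=1.0)
```

The set differences mirror what the octree comparison computes, just without the octree's logarithmic lookups and memory sharing.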

I was also involved in the hiring process, conducting interviews, as well as most aspects of growing a sustainable early-stage startup.

Service robots - Penn

Developed artificial intelligence and learning algorithms for low-cost service robots in CIS700 at Penn.

  • Gitmaster: responsible for organizing and managing the codebase for a group of 28 robotics engineers (lots of pull requests!)
  • Member of the manipulation focus group. Implemented a grasp handler pipeline that takes a request from our task server to pick up a specific object (detected through our vision pipeline), generates a set of candidate grasps (using GPD, built on Caffe), and executes the best one using MoveIt!
  • Code development in ROS
  • Stitched together URDFs from scratch and centralized the launching of all nodes developed by each focus group

Open-source transformations package for ROS using dual quaternions


Dual quaternions have been undervalued in the robotics community considering their interesting (and quite beautiful) mathematical properties (see K. Daniilidis, "Hand-Eye Calibration Using Dual Quaternions"). This is a humble attempt to integrate this representation of transformations into my company's work and to share the effort with the robotics open-source community, in the hope of convincing other roboticists to adopt the format. This was also my first project to use Continuous Integration (CI).
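As a taste of why the representation is attractive: a rigid transform packs into eight numbers, and composing transforms or recovering the translation is just a couple of quaternion products. A minimal sketch in plain NumPy (not the package's actual API):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_pose(q_rot, t):
    """Dual quaternion (real, dual) from a rotation quaternion and
    translation vector: dual part is 0.5 * (0, t) * q_rot."""
    t_quat = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(t_quat, q_rot)

def dq_mul(dq1, dq2):
    """Compose two rigid transforms (dq1 applied after dq2)."""
    r1, d1 = dq1
    r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

def dq_translation(dq):
    """Recover the translation: t = 2 * q_dual * conj(q_real)."""
    r, d = dq
    conj = r * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(d, conj)[1:]

# compose two pure translations under an identity rotation
q_id = np.array([1.0, 0.0, 0.0, 0.0])
dq_a = dq_from_pose(q_id, [1.0, 0.0, 0.0])
dq_b = dq_from_pose(q_id, [0.0, 2.0, 0.0])
t_ab = dq_translation(dq_mul(dq_a, dq_b))
```

Unlike a homogeneous 4x4 matrix, the composed dual quaternion stays easy to renormalize and interpolates smoothly (screw linear interpolation), which is a big part of the appeal.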

If you're a robotics engineer or mathematician, give me your feedback - or better - contribute!

Climbing robots - rigrade

Started a robotics venture creating software-enabled climbing robots for vertical access and surveillance. Received three seed funding rounds from MIT Sandbox and won the Wharton Summer Venture Award ($10k).

  • Full development in ROS
  • Built the robot from scratch, machining custom parts myself
  • Business development and business plan creation

Patent pending - COMING SOON!

Ryan, Kit, and Achille at our office in The Engine, an incubator in Cambridge, MA.

Quadcopters, Penn Advanced Robotics

Wrote Matlab code for flying quadcopters, including:

  • a linear and a nonlinear geometric controller capable of aggressive maneuvers
  • a polynomial trajectory generator for minimum-acceleration (cubic), minimum-jerk (quintic), or minimum-snap (7th-order) polynomials
  • a path planner (Dijkstra and A*)
  • a pipeline combining all of these, deployed on a CrazyFlie quadcopter
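For a rest-to-rest segment, the quintic (minimum-jerk) generator reduces to solving a small linear system for six polynomial coefficients. A sketch in Python (the coursework was in Matlab; names and values here are illustrative):

```python
import numpy as np

def quintic_coeffs(p0, pf, T):
    """Coefficients a0..a5 of p(t) = a0 + a1 t + ... + a5 t^5 for a
    rest-to-rest segment: position p0 -> pf over time T with zero
    velocity and acceleration at both ends (minimum-jerk boundary
    conditions)."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],        # p(0)   = p0
        [0, 1, 0,    0,      0,       0],        # p'(0)  = 0
        [0, 0, 2,    0,      0,       0],        # p''(0) = 0
        [1, T, T**2, T**3,   T**4,    T**5],     # p(T)   = pf
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],   # p'(T)  = 0
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # p''(T) = 0
    ], dtype=float)
    b = np.array([p0, 0, 0, pf, 0, 0], dtype=float)
    return np.linalg.solve(A, b)

coeffs = quintic_coeffs(0.0, 1.0, 2.0)
# evaluate the polynomial at the segment midpoint and endpoint
mid = np.polyval(coeffs[::-1], 1.0)
end = np.polyval(coeffs[::-1], 2.0)
```

The same pattern extends to minimum-snap: two more boundary conditions (jerk at each end) and a 7th-order polynomial per segment.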

Several trajectories flown on a CrazyFlie quadcopter, with the actual path recorded using a Vicon tracking system. The quad had to fly through each waypoint: the trajectory generator computed a (here quintic) polynomial for each segment, and the controller then executed the resulting trajectory.

Orientation Estimation using Unscented Kalman Filter

An Unscented Kalman Filter was implemented from scratch in quaternion representation to fuse and filter data from an accelerometer and a gyroscope; the filter's state estimates the orientation of the Inertial Measurement Unit (IMU). A camera attached to the IMU took sequential pictures during rotation, and the filtered orientation was then used to stitch these images into a panorama of the environment.

The panorama was created in several steps. First, I sampled the orientations at the camera time stamps, since there are many more orientation estimates than images. The pixels of each image are projected onto a sphere around the platform, unrolled for vectorized computation, and rotated by the transformation matrix extracted from the quaternion estimate. The rotated pixels are then projected onto a cylinder, which is finally unwrapped to produce the panorama.
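The projection steps above can be sketched compactly; the field-of-view values and camera convention below are illustrative, not the actual calibration:

```python
import numpy as np

def rays_from_image(h, w, fov_h=np.radians(60), fov_v=np.radians(45)):
    """Unit viewing rays for every pixel of an h x w image.
    Convention (assumed): camera looks down +x, y left, z up."""
    az = np.linspace(fov_h / 2, -fov_h / 2, w)   # azimuth per column
    el = np.linspace(fov_v / 2, -fov_v / 2, h)   # elevation per row
    el, az = np.meshgrid(el, az, indexing="ij")
    rays = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)
    return rays.reshape(-1, 3)                   # "unrolled" pixels

def project_to_cylinder(rays, R):
    """Rotate the rays by the estimated orientation R, then map them
    to cylindrical panorama coordinates (angle, height)."""
    world = rays @ R.T
    theta = np.arctan2(world[:, 1], world[:, 0])             # around cylinder
    height = world[:, 2] / np.linalg.norm(world[:, :2], axis=1)
    return theta, height

rays = rays_from_image(3, 3)
theta, height = project_to_cylinder(rays, np.eye(3))  # identity orientation
```

Each image's (theta, height) pairs index into the shared panorama canvas; repeating this per frame and painting pixels in place yields the naive panorama shown below.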


Naive panorama creation based on the filtered orientation of the platform. Note that this projection did not make use of image stitching, which can greatly enhance the quality and even correct the orientation estimation.

Simultaneous Localization And Mapping (SLAM) for a humanoid using a particle filter

A particle filter was implemented from scratch and used to perform Simultaneous Localization And Mapping (SLAM) for a humanoid walking robot equipped with LIDAR and IMU sensors. A 2D map was generated, providing a basis for the robot to navigate while also correcting the estimate of its position and orientation. Stratified resampling was used to update the particles each step.
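The stratified resampling step can be sketched in a few lines (NumPy; the real implementation details may differ): the unit interval is split into N equal strata, one uniform draw is taken per stratum, and each draw selects a particle through the cumulative weight distribution.

```python
import numpy as np

def stratified_resample(weights, rng=None):
    """Return indices of resampled particles. One sample is drawn
    from each of N equal strata of [0, 1), so each particle survives
    roughly in proportion to its normalized weight, with lower
    variance than plain multinomial resampling."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(weights)
    positions = (np.arange(n) + rng.random(n)) / n   # one draw per stratum
    cumulative = np.cumsum(weights) / np.sum(weights)
    return np.searchsorted(cumulative, positions)

# degenerate example: one particle carries almost all the weight
w = np.array([0.01, 0.01, 0.97, 0.01])
indices = stratified_resample(w)
```

After resampling, the dominant particle is duplicated and the near-zero-weight particles mostly disappear, which is exactly the behavior that keeps the filter focused on plausible poses.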

The project was written in an object-oriented style, which not only allows more intuitive reasoning about a complex system like this, but also provides a good framework for extending the code.

One of the maps generated by the algorithm. The robot's trajectory is shown in red.

the humanoid used in this project


OOP class diagram for the core code

Barrel detection using Gaussian color models and other computer vision techniques

Implemented a Gaussian color model trained on a small data set of images containing one or more red barrels.
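A single-Gaussian color model of this kind is compact: fit a mean and covariance to hand-labeled barrel pixels, then threshold each test pixel's log-likelihood. A toy sketch (the tiny training set and regularization constant are illustrative, not the actual trained model):

```python
import numpy as np

def fit_gaussian(pixels):
    """Fit a Gaussian (mean, covariance) to N x 3 RGB samples taken
    from hand-labeled barrel regions. A small ridge keeps the
    covariance invertible for tiny training sets."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-3 * np.eye(3)
    return mu, cov

def log_likelihood(pixels, mu, cov):
    """Per-pixel log-likelihood under the fitted color model."""
    d = pixels - mu
    inv = np.linalg.inv(cov)
    maha = np.einsum("ni,ij,nj->n", d, inv, d)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (maha + logdet + 3 * np.log(2 * np.pi))

# classify by thresholding: "barrel red" vs background
train = np.array([[200, 20, 20], [210, 30, 25], [190, 25, 15]], float)
mu, cov = fit_gaussian(train)
red = log_likelihood(np.array([[205.0, 24.0, 20.0]]), mu, cov)[0]
blue = log_likelihood(np.array([[20.0, 30.0, 200.0]]), mu, cov)[0]
```

Thresholding the likelihood map and cleaning it up with standard morphology gives the barrel mask; fitting a bounding box around the largest component gives the detection.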



Took 3rd place in the annual Robockey competition at the University of Pennsylvania. These very low-cost autonomous robots use a constellation of IR LEDs on the ceiling to orient themselves.

  • Responsible for electronic circuit design and soldering of three robots
  • Wrote high and low level functionality in C.

Robockey was a part of MEAM510 at the University of Pennsylvania. Other projects included a self-balancing wheeled inverted pendulum, building a speaker from scratch, and building a low-cost RC car.

With Prof. Dr. Jonathan Fiene

Machine learning

While taking the class Machine Learning (CIS519) at Penn, I implemented the following algorithms from scratch in Python:

  • Neural Network for digit recognition from images
  • Reinforcement learners: Q-learning, value iteration, policy iteration, feature extractors. Implemented and trained approximate Q-learning for the Pac-Man game (see video below) and 'taught' a crawler how to walk in simulation
  • General K-Means; image segmentation using K-Means (see image below)
  • Boosted Decision Tree for classification
  • Online Naive Bayes
  • Linear, Polynomial, and Logistic regression
  • Support Vector Machines using different kernels

Final report on the use of machine learning techniques to predict start-up success.

It is interesting to note that the learner never tried to 'eat' the ghosts after eating the large pill; it learned that it could win simply by avoiding the ghosts and eating all the pills. If a high reward were given for eating a ghost, the learner might learn to do so, given enough exploration. Learning such hierarchical strategies might require shifting to deep Q-learning or other more powerful learners.

Haptic environment

For the class 'Introduction to Robotics' (MEAM520) at Penn, my team created a virtual environment consisting of multiple objects and surfaces. The Phantom robot senses the position of its end-effector, which is displayed on screen inside a virtual box. By moving the end-effector with a fingertip, the user interacts with the objects on screen; the simulator feeds the forces generated by these interactions back to the robot's motors so that the user can physically feel the surfaces and objects.

These surfaces and objects are purely mathematical constructs. We created a point of attraction (pulls your hand to a specific location when within its reach), a switch, a surface with different spatial texture, a viscous fluid, and a ball.
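Penalty-based rendering of one such surface is essentially Hooke's law: push back along the surface normal in proportion to penetration depth. A sketch (the stiffness value and wall geometry are illustrative, not our actual environment):

```python
import numpy as np

def wall_force(pos, wall_z=0.0, stiffness=300.0):
    """Haptic rendering of a horizontal surface at height wall_z:
    when the fingertip penetrates below it, return a spring force
    proportional to penetration depth; zero force in free space."""
    penetration = wall_z - pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                       # free space: no force
    return np.array([0.0, 0.0, stiffness * penetration])

free = wall_force(np.array([0.0, 0.0, 0.01]))    # above the wall
inside = wall_force(np.array([0.0, 0.0, -0.002]))  # 2 mm penetration
```

The other effects follow the same pattern with different force laws: a point of attraction is a spring toward a fixed point, a viscous fluid is a force proportional to negative velocity, and texture modulates the normal force with position.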

Path planning and collision avoidance using potential fields

Implementation in Matlab of path planning for a simple articulated robot using potential fields with collision avoidance. The end pose has points of attraction that correspond to points on the robot. Objects have a sphere of influence around them and exert a repulsive force within that sphere that grows rapidly the closer any point on the robot gets to the object's surface. The forces are calculated in Cartesian space rather than joint space and are then converted to joint space using the Jacobians.

The algorithm does not account for local minima and can get stuck in certain configurations; adding random walks would be one solution. In the demonstration video, the second random end pose is not completely reached because the gradient-descent step size is not matched to the goal tolerance. The algorithm is slow and not optimal, but very easy to understand.
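The two force terms can be sketched directly from the description above (the gains, influence radius, and 2D toy setup are illustrative; the real implementation mapped these Cartesian forces to joint torques via the Jacobian transpose):

```python
import numpy as np

def attractive_force(p, goal, zeta=1.0):
    """Linear spring pulling a control point toward its goal."""
    return zeta * (goal - p)

def repulsive_force(p, obstacle, radius=1.0, eta=1.0):
    """Repulsion active only inside the obstacle's sphere of
    influence; grows rapidly as the point nears the obstacle."""
    diff = p - obstacle
    d = np.linalg.norm(diff)
    if d >= radius or d == 0.0:
        return np.zeros_like(p)
    return eta * (1.0 / d - 1.0 / radius) / d**2 * (diff / d)

# one gradient-descent step on the combined field
p = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacle = np.array([0.5, 0.2])
step = 0.01 * (attractive_force(p, goal) + repulsive_force(p, obstacle))
```

The local-minimum problem is visible in the math: wherever the attractive and repulsive terms cancel, the combined gradient is zero and the descent stalls, which is why random restarts or walks are needed.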

Light-painting robot

For the class Intro to Robotics, MEAM520 at Penn, my team planned trajectories for a PUMA robot to follow. The robot was mounted with an LED. A long-exposure photograph captured the robot moving so that the end result was a 'light painting' by a robot.

With Prof. Dr. Katherine Kuchenbecker

long-exposure photograph of the painting. The robot can be vaguely seen in the background.

Edelmanlab, MIT - multi-modal stent testing

Design and development of a high-throughput multi-modal stent testing device.

  • Responsible for designing and building a machine that twists, bends, and extends stents while allowing fluids to run through them, to simulate and study biodegradable stent degradation ex vivo over extended periods of time.
  • Built a full model in SolidWorks. Machined several custom parts at the MIT Edgerton machine shop.


working product completed by Boston Scientific

Edelmanlab, MIT - computer vision

Co-led a project predicting the outcome of Transcatheter Aortic Valve Replacement (TAVR) from pre-operative CT scans. Developed object-oriented Python computer vision algorithms for segmenting images and automatically extracting features from large datasets. The algorithm improved upon state-of-the-art automatic calcium detection on very noisy data. Responsible for maintaining a large code base.
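Thresholding plus connected components is the classic baseline for calcium candidates in CT; a toy sketch (the 130 HU cutoff is a common convention in calcium scoring, and none of this is the lab's actual pipeline):

```python
import numpy as np

def calcium_mask(ct_slice, hu_threshold=130.0):
    """Candidate calcium voxels: CT intensities above a
    Hounsfield-unit threshold (illustrative cutoff)."""
    return ct_slice > hu_threshold

def count_lesions(mask):
    """Count 4-connected components in a binary mask; each
    component is one candidate calcification."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        count += 1
        stack = [(i, j)]                 # flood-fill this component
        while stack:
            a, b = stack.pop()
            if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                    and mask[a, b] and not seen[a, b]):
                seen[a, b] = True
                stack.extend([(a+1, b), (a-1, b), (a, b+1), (a, b-1)])
    return count

# toy 3x4 "CT slice" in Hounsfield units with two bright deposits
slice_hu = np.array([[0, 200, 0,   0],
                     [0, 210, 0, 300],
                     [0,   0, 0, 310]], dtype=float)
lesions = count_lesions(calcium_mask(slice_hu))
```

Improving on this baseline for noisy scans is where the real work was: per-component features (size, shape, intensity statistics) feed the downstream outcome prediction.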

With Elazer Edelman, MD PhD