Daniel Farkash

I am a graduate student at the University of Pennsylvania studying Computer Science and Robotics. I received my Bachelor's degree in Computer Science from Cornell University.

My research has mainly focused on deep learning for robot autonomy and using vision for recognition and planning.

This page includes materials used to present some of the work from my past projects.

Autonomous Mobile Robotics Laboratory 

- The University of Texas at Austin

This project uses deep learning for robot navigation, aiming to learn costs for different terrains in an unsupervised way from the visual and inertial data in unlabeled human navigation demonstrations.
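As a rough illustration of this kind of pipeline (a sketch only, not the published model): visual patches and inertial readings collected at the same point along a demonstration can be encoded into a shared representation with a contrastive objective, and a small head can map that representation to a scalar terrain cost. All module names, dimensions, and hyperparameters below are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TerrainCostModel(nn.Module):
    """Illustrative sketch: joint visual/inertial embedding plus a scalar cost head."""
    def __init__(self, embed_dim=64):
        super().__init__()
        # Small CNN for image patches of the terrain under/ahead of the robot
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # MLP for a short flattened window of IMU readings (50 samples x 6 axes, placeholder)
        self.inertial_enc = nn.Sequential(
            nn.Linear(6 * 50, 128), nn.ReLU(), nn.Linear(128, embed_dim),
        )
        # Non-negative scalar cost for a terrain embedding
        self.cost_head = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(),
                                       nn.Linear(32, 1), nn.Softplus())

    def forward(self, img_patch, imu_window):
        z_v = F.normalize(self.vision_enc(img_patch), dim=-1)
        z_i = F.normalize(self.inertial_enc(imu_window.flatten(1)), dim=-1)
        return z_v, z_i, self.cost_head(z_v)

def contrastive_loss(z_v, z_i, temperature=0.1):
    """InfoNCE-style loss: visual and inertial data from the same location
    along the demonstration are treated as positive pairs."""
    logits = z_v @ z_i.t() / temperature
    targets = torch.arange(z_v.size(0), device=z_v.device)
    return F.cross_entropy(logits, targets)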

The resulting paper was accepted for presentation at the Conference on Robot Learning (CoRL) and will be published in Proceedings of Machine Learning Research (PMLR). 

The video to the right contains an example deployment of a learned terrain cost function on the lab's Spot robot. 

The video in the bottom right is a deployment over a 3-mile stretch.

The bar graph below shows the performance of unsupervised classification on the model's learned representations compared to state-of-the-art models.
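For context, a common recipe for evaluating unsupervised classification on learned representations (a sketch of the standard approach, which may differ in detail from the evaluation in the paper) is to cluster the embeddings and match clusters to ground-truth terrain classes with the Hungarian algorithm:

import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def unsupervised_classification_accuracy(embeddings, labels, n_classes):
    """Cluster learned representations, map clusters to ground-truth classes
    (labels assumed to be integers in [0, n_classes)), and report accuracy."""
    clusters = KMeans(n_clusters=n_classes, n_init=10).fit_predict(embeddings)
    # Confusion matrix between cluster ids and true labels
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for c, y in zip(clusters, labels):
        cm[c, y] += 1
    # Best one-to-one assignment of clusters to classes
    rows, cols = linear_sum_assignment(cm, maximize=True)
    return cm[rows, cols].sum() / len(labels)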

Video: 2022-10-03-demo.mp4
AMRL Navigation Presentation

This presentation includes brief explanations and results of some of the different models I created for the project. 

Below is an updated diagram of the data collection method.

Laboratory for Intelligent Systems and Controls 

- Cornell University

This project uses computer vision methods for identification and prediction in hockey sports analytics, including a conditional generative adversarial model for homography estimation, and action recognition combined with keypoint detection for hockey puck position estimation.

The video on the right shows the homography estimation and player detection in action. The field of view of the camera in the video above is estimated and projected onto the 2D representation of the field below.
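As an illustration of the projection step only (the homography itself comes from the trained conditional GAN; the matrix values and frame size below are placeholders), a 3x3 image-to-rink homography maps both player detections and the corners of the camera frame onto the 2D overhead view with OpenCV:

import numpy as np
import cv2

# Placeholder homography mapping image pixels -> 2D rink coordinates;
# in the project this matrix comes from the conditional GAN estimator.
H = np.array([[ 0.12, 0.01, -40.0],
              [-0.02, 0.15, -10.0],
              [ 0.0,  0.0,   1.0]], dtype=np.float64)

def to_rink(points_xy, H):
    """Project Nx2 image points into rink coordinates with a homography."""
    pts = np.asarray(points_xy, dtype=np.float64).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: project player detections (bottom-center of bounding boxes)
players_img = [(640, 520), (890, 480)]
players_rink = to_rink(players_img, H)

# Example: project the image corners to visualize the camera's field of view
h, w = 720, 1280                       # placeholder frame size
fov_rink = to_rink([(0, h), (w, h), (w, 0), (0, 0)], H)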






Videos: output_vid.mp4, output_rink.mp4, video1.mp4

Both of the videos on the left are of the same game and start at the same time. They show the results of using the trained models to detect the players, which team they are on (or whether they are referees), what actions they are taking (e.g., shot, pass, dribble), and the location of the puck.






The view to the left shows representations of the players (which change color with their action) and the puck (in pink) projected onto a 2D representation of the field.
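A simplified sketch of how such an overlay could be rendered (the coordinates, action-to-color mapping, and rink dimensions below are illustrative, not the project's actual plotting code):

import matplotlib.pyplot as plt

# Illustrative action-to-color mapping; the real visualization uses the
# action recognition output for each tracked player.
ACTION_COLORS = {"skate": "gray", "pass": "blue", "shot": "red"}

def draw_rink_frame(players, puck_xy, rink_size=(200, 85)):
    """players: list of (x, y, team, action); puck_xy: (x, y) or None."""
    fig, ax = plt.subplots(figsize=(8, 3.4))
    ax.set_xlim(0, rink_size[0])
    ax.set_ylim(0, rink_size[1])
    for x, y, team, action in players:
        marker = "o" if team == "home" else "s"
        ax.scatter(x, y, c=ACTION_COLORS.get(action, "gray"), marker=marker, s=80)
    if puck_xy is not None:
        ax.scatter(*puck_xy, c="magenta", s=30)   # puck shown in pink
    ax.set_title("2D rink projection")
    return fig

fig = draw_rink_frame([(50, 40, "home", "pass"), (120, 30, "away", "shot")],
                      puck_xy=(60, 42))
fig.savefig("rink_frame.png")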

Collaboration with the EmPRISE Lab 

- Cornell University

For this project, I collaborated with professors Rachit Agarwal and Tapomayukh Bhattacharjee to stress test ROS (Robot Operating System) with the goal of finding potential areas for improvement. My focus was on network capabilities when communicating with multiple autonomous systems.

Interaction ROS

Here is a short presentation of a frequency/delay test for transmitting LIDAR data across machines using ROS's communication methods.
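A simplified version of this kind of measurement (not the exact test code; the topic name and message type are assumptions): a rospy node on the receiving machine compares each LIDAR message's header stamp to its arrival time and logs the per-message delay and the effective receive rate.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

class LidarDelayMonitor(object):
    """Logs per-message transport delay and effective receive rate for a
    LIDAR topic published on another machine."""
    def __init__(self, topic="/velodyne_points"):   # topic name is a placeholder
        self.last_arrival = None
        self.sub = rospy.Subscriber(topic, PointCloud2, self.callback, queue_size=10)

    def callback(self, msg):
        now = rospy.Time.now()
        delay = (now - msg.header.stamp).to_sec()   # assumes synchronized clocks
        if self.last_arrival is not None:
            hz = 1.0 / max((now - self.last_arrival).to_sec(), 1e-6)
            rospy.loginfo("delay: %.4f s, receive rate: %.1f Hz", delay, hz)
        self.last_arrival = now

if __name__ == "__main__":
    rospy.init_node("lidar_delay_monitor")
    LidarDelayMonitor()
    rospy.spin()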

Here is an explanation and trace of the data path for this interaction. I have included this experiment because it helped reveal that the ROS communication protocol is not ideal for some common ROS use cases.

Data Path