Rishi Veerapaneni

Google Scholar

rishiv (at) berkeley (dot) edu

I am an undergraduate student at U.C. Berkeley graduating in 2020 with a B.S. in Electrical Engineering & Computer Science and a B.A. in Applied Mathematics.

Apart from classes, I have spent a significant part of my academic career doing research and teaching. I am currently doing research with Professor Sergey Levine at Berkeley AI Research and teaching CS 170 (Algorithms) with Professors Satish Rao and Prasad Raghavendra.

In the past, I have worked at Two Sigma, at Lawrence Livermore National Laboratory with Professors Gerald Friedland and Kannan Ramchandran, and in the Laboratory of Quantitative Imaging at Stanford University with Professor Daniel Rubin.

Teaching

I have been very active in teaching at UC Berkeley and have thoroughly enjoyed the experience. I am particularly interested in improving as a teacher and in improving the internal (teaching-assistant-facing) structure of courses. Please see my teaching page for more information!

Research

I am currently pursuing generalization in reinforcement learning from image observations. Our main idea is that decomposing a scene into a set of latent variables (as opposed to a single one), and interpreting and planning over that set, allows us to transfer knowledge about an object from one scene to another.

Although much of my research experience has focused on the AI side of robotics, I am generally interested in applied EE and CS research with a mathematical foundation, such as controls, computer vision, and NLP.

Entity Abstraction in Visual Model-Based Reinforcement Learning

Rishi Veerapaneni*, John D. Co-Reyes*, Michael Chang*, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine

Entity Abstraction in Visual Model-Based Reinforcement Learning. Conference on Robot Learning (CoRL), 2019

Paper / Project webpage / Code

We introduced a framework for model-based planning that predicts and plans with object representations learned without supervision. The key idea behind our approach is to frame model-based planning in the language of a factorized HMM that processes a set of hidden states independently and symmetrically. This approach gives us permutation invariance, order invariance, and count equivariance by collapsing the combinatorial complexity along the object dimension. We show on a combinatorially complex block-stacking task that we achieve almost three times the accuracy of a non-latent-factorized video prediction model and outperform an oracle model that assumes access to object segmentations.
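To give a rough sense of the symmetry argument, here is a minimal sketch (NumPy, with made-up shapes and a mean-pooled interaction term; this is not the paper's architecture): one shared transition function is applied to every entity latent, conditioned only on a symmetric summary of the others, so the update is equivariant to permutations of the entities.

```python
# Minimal sketch, not the paper's code: a shared transition applied
# symmetrically to each entity latent. Shapes and the mean-pooled
# interaction are illustrative assumptions.
import numpy as np

def step(H, W_self, W_ctx):
    """H has shape (K, d): one d-dimensional latent per entity.

    Every entity is updated with the same weights, conditioned on a
    symmetric (mean-pooled) summary of the other entities, so permuting
    the rows of H simply permutes the rows of the output.
    """
    out = np.empty_like(H)
    for k in range(H.shape[0]):
        others = np.delete(H, k, axis=0).mean(axis=0)   # symmetric pooling
        out[k] = np.tanh(H[k] @ W_self + others @ W_ctx)
    return out

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))              # 4 entities, 8-dim latents
W_self = 0.1 * rng.normal(size=(8, 8))
W_ctx = 0.1 * rng.normal(size=(8, 8))

perm = rng.permutation(4)                # permutation equivariance check
assert np.allclose(step(H, W_self, W_ctx)[perm], step(H[perm], W_self, W_ctx))
```

Because the same weights handle every slot, such a model can also be run with more or fewer entities than it was trained on, which is the count property mentioned above.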


This work also appeared in:

Object Abstraction in Visual Model-Based Reinforcement Learning. Perception as Generative Reasoning (PGR) workshop, NeurIPS 2019

Tricking Neural Networks: Create your own Adversarial Examples

Daniel Geng, Rishi Veerapaneni

Article published on January 10, 2018 at ML@Berkeley

Article webpage

A fifteen-minute read that introduces the concept of adversarial examples and how to construct them (figuratively and literally). We walk readers through code snippets that show how different types of adversarial examples can be created.
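For a flavor of what those snippets look like, below is a minimal FGSM-style sketch in PyTorch; the model and data here are placeholders rather than the article's actual examples.

```python
# Minimal FGSM sketch (illustrative; not the article's code).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: nudge x in the direction that increases
    the loss, with an L-infinity perturbation budget of eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage on a randomly initialized classifier and fake 28x28 "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())           # perturbation stays within eps
```

The same gradients used to train a network can also be used to perturb its inputs, which is the basic idea behind this family of attacks.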

Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis

Assaf Hoogi, Arjun Subramaniam*, Rishi Veerapaneni*, Daniel Rubin

Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis. IEEE Transactions on Medical Imaging, vol. 36, no. 3, March 2017

Paper

We generalized the level set segmentation approach by providing a novel method for adaptively estimating active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected.
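For context, a standard Chan-Vese-style level set energy is shown below (this is a textbook formulation, not a formula copied from the paper); its weights are normally hand-tuned global constants, and the contribution here is to estimate such active contour parameters adaptively for each lesion from convolutional neural network and texture features.

```latex
% Classic Chan-Vese-style energy (illustrative; the paper's exact energy and
% parameterization may differ). H is the Heaviside function, \delta its
% derivative, and c_1, c_2 the mean intensities inside/outside the contour.
% \mu, \lambda_1, \lambda_2 are the active contour parameters that are
% usually fixed by hand and are instead estimated adaptively per lesion.
E(c_1, c_2, \phi) =
    \mu \int_\Omega \delta(\phi(x))\, \lvert \nabla \phi(x) \rvert \, dx
  + \lambda_1 \int_\Omega \lvert I(x) - c_1 \rvert^2 \, H(\phi(x)) \, dx
  + \lambda_2 \int_\Omega \lvert I(x) - c_2 \rvert^2 \, \bigl(1 - H(\phi(x))\bigr) \, dx
```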