Brandon Woodard is a PhD candidate at Brown University and a graduate researcher in the Brown Visual Computing group.
Brandon received his B.S. from CSUMB and his M.S. from Brown, both in Computer Science. He has interned at Nvidia, Dolby Laboratories, MIT Lincoln Laboratory, NASA's Jet Propulsion Laboratory, and Intel. His current research focuses on interaction techniques for augmented reality interfaces.
LinkedIn: https://www.linkedin.com/in/brandonjwoodard/
Contact: brandon_woodard@brown.edu
Off-the-shelf smartphone-based AR systems typically use a single front-facing or rear-facing camera, which restricts user interaction to a narrow field of view and a small screen, limiting their practicality. We present Cam-2-Cam, an interaction concept implemented in three smartphone-based AR applications with interactions that span both cameras.
Read the paper here: https://arxiv.org/abs/2504.20035
Our team presented VRoxy at UIST 2023. VRoxy supports large-area collaboration from a much smaller office using a VR-driven robotic proxy, enabling asymmetric remote collaboration.
Read the paper: https://dl.acm.org/doi/abs/10.1145/3586183.3606743
Press releases:
News @ Brown: https://www.brown.edu/news/2023-10-26/vroxy
Cornell Chronicle: https://news.cornell.edu/stories/2023/10/robot-stand-mimics-your-movements-vr
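
As a rough illustration of the proxy idea, the sketch below relays a VR user's pose in a virtual replica of the remote room to the robotic proxy as a navigation goal plus a head yaw to mirror. This is an assumption-laden sketch, not VRoxy's actual code; all names and the 1:1 mapping are illustrative.

# Assumed illustration, not VRoxy's actual implementation: the VR user's
# pose in a virtual replica of the remote room is forwarded as a
# navigation goal and a head yaw for the robotic proxy to mirror.

from dataclasses import dataclass


@dataclass
class ProxyCommand:
    goal_x: float        # navigation goal in the remote room (meters)
    goal_y: float
    head_yaw_deg: float  # head orientation for the proxy to mirror


def vr_pose_to_proxy_command(user_x, user_y, user_head_yaw_deg):
    """Relay the user's position in the virtual replica (expressed in the
    remote room's frame) as a command for the robotic proxy."""
    return ProxyCommand(goal_x=user_x, goal_y=user_y,
                        head_yaw_deg=user_head_yaw_deg)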
In this work we investigated redirected walking in virtual reality, using saccadic eye movements to determine when to rotate the participant within the virtual environment. Participants were placed in a virtual environment and completed a series of simple tasks while being rotated by a randomized degree of rotation whenever a saccade occurred. Our results showed a marked difference between participants' movement in the virtual environment and their movement in the real world. This difference shows the potential of using saccadic eye movements to keep a user immersed even within a smaller-than-optimal play area.
Read the paper: https://link.springer.com/chapter/10.1007/978-3-031-20716-7_16
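
For a sense of how saccade-triggered rotation can be driven, here is a minimal sketch: a velocity-threshold saccade detector gates a per-trial rotation gain applied to the virtual camera. The threshold and gain values are assumptions for illustration, not the study's actual implementation.

# Minimal sketch (assumed parameters): extra virtual-camera yaw is injected
# only while a saccade is detected, relying on saccadic suppression to mask
# the rotation from the user.

SACCADE_VELOCITY_THRESHOLD = 180.0  # deg/s; assumed detection threshold


def is_saccade(prev_gaze_deg, curr_gaze_deg, dt):
    """Flag a saccade when angular gaze velocity exceeds the threshold."""
    velocity = abs(curr_gaze_deg - prev_gaze_deg) / dt
    return velocity > SACCADE_VELOCITY_THRESHOLD


def redirect_yaw(camera_yaw_deg, rotation_gain_deg_per_s, dt, saccade_active):
    """Apply the trial's randomized rotation gain to the virtual camera,
    but only during a saccade; otherwise leave the camera untouched."""
    if saccade_active:
        camera_yaw_deg += rotation_gain_deg_per_s * dt
    return camera_yaw_deg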
We developed a model to identify more of the underlying structures captured by the Global Ecosystem Dynamics Investigation (GEDI) lidar aboard the International Space Station.
Read the paper here:
Under the leadership of Dr. Robert Anderson, I developed a virtual reality platform that displays space exploration data from NASA's Jet Propulsion Laboratory.
The goal of this project is to streamline JPL's mission data onto a single platform that researchers and educators can access remotely, without needing to visit the Regional Planetary Image Facility (RPIF) located at JPL. Mission data available on the platform include digitized maps, spacecraft manuals, rare photos, and 360° views of the target planet, so users can learn about each mission from a first-person perspective.
The information is organized by a timeline of missions and their respective targets (planets, asteroids, interstellar space, etc.).
Target platforms are Google Daydream/Cardboard and Oculus.
Watch the video to see how it works here:
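
To give a sense of the timeline organization described above, here is a hedged sketch of how a mission entry and its grouping by target could look. The field names and structure are illustrative assumptions, not the platform's actual schema.

# Illustrative sketch only: field names and grouping are assumptions.

from dataclasses import dataclass, field


@dataclass
class Mission:
    name: str
    launch_year: int
    target: str                          # e.g. "Mars", "Ceres"
    digitized_maps: list = field(default_factory=list)
    spacecraft_manuals: list = field(default_factory=list)
    rare_photos: list = field(default_factory=list)
    panorama_360_views: list = field(default_factory=list)


def build_timeline(missions):
    """Group missions by target, with each group ordered by launch year."""
    timeline = {}
    for mission in sorted(missions, key=lambda m: m.launch_year):
        timeline.setdefault(mission.target, []).append(mission)
    return timeline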
Working under the supervision of Dr. Krzysztof Pietroszek and Dr. Christian Eckhardt, we developed a low-cost robotic hand with flex-sensor hardware as an alternative to the myoelectric sensors typically used for prosthetics.
This project was featured in CSUMB's University Magazine under 'A New Reality'. Read here!
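
At the core of the flex-sensor approach is a simple mapping from sensor reading to finger actuation. The sketch below shows one such linear mapping; the calibration constants are placeholders and the hardware I/O (ADC read, servo write) is omitted, so this is not the project's real calibration or code.

# Hedged sketch of the sensing-to-actuation mapping.

ADC_STRAIGHT = 300   # assumed reading with the finger fully extended
ADC_BENT = 700       # assumed reading with the finger fully flexed
SERVO_MIN_DEG = 0.0
SERVO_MAX_DEG = 90.0


def flex_to_servo_angle(adc_value):
    """Linearly map a raw flex-sensor reading to a finger servo angle."""
    adc_value = max(ADC_STRAIGHT, min(ADC_BENT, adc_value))
    t = (adc_value - ADC_STRAIGHT) / (ADC_BENT - ADC_STRAIGHT)
    return SERVO_MIN_DEG + t * (SERVO_MAX_DEG - SERVO_MIN_DEG)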
Under the supervision of Dr. Krzysztof Pietroszek, my team and I developed a new way to interact in virtual reality. It uses an off-the-shelf smartwatch and is intended for mobile-based head-mounted displays (e.g., Samsung Gear VR, Google Daydream/Cardboard).
This work was later accepted to ACM's VRST conference, where I presented it at the Technical University of Munich in Munich, Germany.
The publication can be found here: ACM Digital Library