I am a student at Rochester Institute of Technology studying Motion Picture Science and Community-Centered Digital Media, Technology, and Communications. I am passionate about telling stories that bring people closer together, no matter the medium, from theme parks and video games to film and animation. I am dedicated to developing expertise in Virtual Production, Immersive Interactive Installations, Imaging System Engineering, Projection Mapping, Motion Capture, Virtual Reality, and Volumetric Reconstruction. By combining technical innovation, engineering skills, and community-driven communication, I aim to create experiences that bridge the digital and physical worlds and foster meaningful connections.
Me in a Cool Sweater my Sister Made Me
This is a fun project I started as a way to display the volumetric video captured for my thesis, inspired by the multi-plane camera of animation fame. The picture frame combines stacked Pepper's ghost illusions with eye tracking to provide dynamic occlusion correction. Real-time image processing lets the viewer observe any image with 2.5D depth, with full volumetric capabilities on the way! Below is the link to its GitHub repository, where I will be adding full documentation soon.
https://github.com/aidanmontag13/MultiPlaneDisplay/tree/main
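For anyone curious about the occlusion-correction idea before the full documentation lands, here is a minimal sketch (not the repo's actual code) of how a tracked eye position can drive per-layer parallax across stacked panes. The layer count, parallax gain, and RGBD input are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: slice an RGBD frame into depth layers and apply a
# per-layer parallax shift driven by the tracked eye position.
# Layer count and gain are illustrative placeholders.

NUM_LAYERS = 3          # one image per physical Pepper's ghost pane
PARALLAX_GAIN = 40.0    # pixels of shift per unit of eye offset (tuned by hand)

def slice_into_layers(rgb, depth, num_layers=NUM_LAYERS):
    """Split an RGB frame into slices using evenly spaced depth bins."""
    bins = np.linspace(depth.min(), depth.max(), num_layers + 1)
    layers = []
    for i in range(num_layers):
        mask = (depth >= bins[i]) & (depth < bins[i + 1])
        layer = np.zeros_like(rgb)
        layer[mask] = rgb[mask]
        layers.append(layer)
    return layers

def shift_for_eye(layer, layer_index, eye_xy):
    """Shift a layer opposite the viewer's eye offset, scaled by its depth index."""
    scale = (layer_index / (NUM_LAYERS - 1)) * PARALLAX_GAIN
    dx = int(-eye_xy[0] * scale)
    dy = int(-eye_xy[1] * scale)
    return np.roll(layer, shift=(dy, dx), axis=(0, 1))

def render_panes(rgb, depth, eye_xy):
    """Return one corrected image per pane, farthest layer first."""
    layers = slice_into_layers(rgb, depth)
    return [shift_for_eye(l, i, eye_xy) for i, l in enumerate(layers)]
```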
For my summer project, which quickly spiraled into my senior capstone, I have been working on an open-source volumetric capture system. Building upon the PiTrack camera I designed last year, I have been writing software to automatically calibrate intrinsic and extrinsic parameters, capture synchronized uncompressed feeds, create point clouds from depth maps, project 3D point clouds, and organize data into the COLMAP output structure while bypassing the need for structure from motion in COLMAP. The goal is a toolset that lets students at RIT capture volumetric data just as easily as they use our Vicon motion capture system, and an affordable, artist-friendly workflow for using volumetric video as a filmmaking tool. More to come!
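The point cloud step boils down to back-projecting each depth pixel through the calibrated intrinsics, then moving it into a shared world frame with the extrinsics. Below is a minimal sketch of that step, assuming metric depth maps and a standard 3x3 K matrix; it is not the capstone's full code.

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a depth map (meters) into camera-space 3D points
    using pinhole intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop invalid (zero-depth) pixels

def to_world(points_cam, R, t):
    """Move camera-space points into a shared world frame using the
    extrinsics recovered during calibration (R: 3x3, t: length-3)."""
    return points_cam @ R.T + t
```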
While TAing the virtual production course, I constantly see how in-camera visual effects techniques disrupt traditional color management pipelines. On an LED volume, it is not enough to simply calibrate the display to a colorimetric standard. The narrow spectral emission of the emitters, the camera's non-Luther-Ives spectral sensitivities, and its "preferred" colorimetric encoding lead to a mismatch between the virtual art department's intent and the captured image. This implementation, based on research from the article linked below (look out for my cameo), provides a simple, straightforward method to generate a camera-to-wall color correction matrix and export it as a 3D LUT that can be easily implemented in Unreal Engine or the media engine of your choice.
https://github.com/aidanmontag13/Virtual-Production-Calibration
https://mijonline.smpte.org/mijonline/library/page/april_2025/43/
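At its core, the tool fits a matrix from patch values captured through the camera back to the values the virtual art department intended, then bakes that matrix into a LUT. Here is a minimal sketch of those two steps, assuming you already have paired linear RGB patch measurements; the .cube writer is a simplified stand-in for the repo's exporter.

```python
import numpy as np

def fit_correction_matrix(captured_rgb, target_rgb):
    """Least-squares 3x3 matrix mapping camera-captured patch values
    back to the intended values. Both arrays are N x 3, linear RGB."""
    M, _, _, _ = np.linalg.lstsq(captured_rgb, target_rgb, rcond=None)
    return M.T   # so that corrected = M @ rgb

def write_cube_lut(M, path, size=33):
    """Bake the matrix into a simple 3D LUT in .cube format, which Unreal
    (or most media servers) can load as a color transform."""
    grid = np.linspace(0.0, 1.0, size)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in grid:            # .cube ordering: red varies fastest, blue slowest
            for g in grid:
                for r in grid:
                    out = np.clip(M @ np.array([r, g, b]), 0.0, 1.0)
                    f.write(f"{out[0]:.6f} {out[1]:.6f} {out[2]:.6f}\n")
```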
One of my responsibilities at Magic Spell Studios is to coordinate and direct live virtual production demos for campus events. My main additions over prior demonstrations have been a greater sense of theatricality, interaction, and narrative, making for a truly immersive experience. This requires preparing ICVFX shots and environments, programming sequences, and delivering an informative and engaging presentation on the technical process. We take the audience through each step of the virtual production workflow, highlighting camera tracking, motion capture, lighting, and the limitations that come with the process. We then enlist the audience's help, giving them superpowers, to make a full action sequence come to life. This is one of my favorite responsibilities since it allows me to both entertain and educate, as well as experiment with fun techniques that may not be practical on a real set.
For my "Machine Learning for Computer Vision" final project. I scripted a simple head tracker that uses yolo v8 pose to locate facial features using my laptops webcam, and calculates the 3D positions of the users head using camera intrinsic's. Then, the calculated rotation and location of the head are sent to Unreal Engine over Open Sound Control, which moves a blueprint actor in real time, with a bit of interpolation. This was a fun project, and a great opportunity to synthesize my my knowledge of machine learning, image processing and game development. Additionally, it opens doors for more sophisticated tracking for immersive projects like my deployable VR cave.
This is a project I worked on for my 3D modeling class. I already had solid experience with shading, but this was a great opportunity to improve my geometry and UV unwrapping skills. It was also a nice chance to carry some of my Blender knowledge over to Maya.
After completing the first stage of the stairwell project in the summer of 2024, my primary objective became overhauling MAGIC's motion capture lab into an Interactive Development and Capture Studio. This meant creating a space that provides “production-quality” marker-based motion capture for large-scale projects and “game-ready” markerless motion capture for projects with quicker turnaround times. To complement the stairwell, this project also offers students a maker space for developing and testing deployable interactive displays and exhibits, and provides researchers with an RGB camera-based volumetric capture space to experiment with and develop four-dimensional media.
I needed to keep myself busy over winter break, so I made PiTrack. A common problem we have at MAGIC is the lack of an "everything" camera that can adapt to various applications (multispectral, RGBD, camera tracking, volume capture, mocap, etc.). I designed PiTrack as a modular Raspberry Pi-based system that is easy to reproduce, modify, and write code for. Check out the GitHub below!
https://github.com/aidanmontag/PiTrack
As part of an ongoing project at Magic Spell Studios, I designed and helped install a 400-square-foot, 270-degree surround projection system at the studio’s central staircase. My responsibilities included selecting projector specifications, determining optimal placement, and sourcing appropriate installation equipment. I also researched occlusion limitations and explored alternative surface materials to maintain accurate color representation in high ambient light conditions. In addition, I handled keystone correction and output blending, set up network control, and built a streamlined workflow through Pixera. I created templates for RIT artists to produce a wide range of content for the installation and developed interactive systems in Unreal Engine, empowering artists to bring their creative visions to life.
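The warping and blending for the installation live in Pixera rather than custom code, but the keystone idea underneath is just a perspective warp from the rendered frame to the measured corner positions on the wall. Here is a minimal sketch with placeholder corner coordinates.

```python
import cv2
import numpy as np

# Keystone-correction sketch: pre-warp the rendered output so the projector's
# four measured corner hits line up with the intended rectangle on the wall.
# The coordinates below are placeholders; in practice they are dialed in per
# projector, and Pixera handles the blend between units.

src = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])        # rendered frame corners
dst = np.float32([[40, 25], [1880, 60], [1900, 1050], [20, 1070]])    # measured wall positions

H = cv2.getPerspectiveTransform(src, dst)

def correct(frame):
    """Pre-warp a frame so it lands rectilinear on the projection surface."""
    return cv2.warpPerspective(frame, H, (1920, 1080))
```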
3D projector layout for previsualization and calibration.
"Vaporware" by Ashlyn Kreiss
"Roc City Mural" by MAGIC Animation Team
"Ritchie" by Annelise Wall
Concepts for shrouds to cover the projectors and improve the visual language of the installation.
Fish!
As a way to test our motion capture and retargeting workflows, I made a quick Godzilla short. Enjoy!
As a personal endeavor and an offshoot of my projection mapping project, I have been building an extremely low-budget virtual reality cave using anaglyph 3D techniques, out-of-commission classroom projectors, and an Xbox Kinect to create immersive interactive experiences that are easily deployable anywhere. The second part of this project has been developing a fast-paced, shooting-gallery-style experience for the system, in which the player uses their hands to fire laser beams at incoming baddies, reminiscent of Spider-Man Web Slingers and Toy Story Mania at the Disney Parks. On a more personal note, I find virtual reality much more exciting as a communal space that can be shared with people. Headsets are amazing, but inherently socially isolating. With room-based VR, I hope we can use immersive experiences to bring people closer together.
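The cave's stereo trick is ordinary red/cyan anaglyph compositing, done per wall inside Unreal. A minimal standalone sketch of the compositing step looks like this, assuming left/right views rendered from cameras offset by the viewer's eye spacing.

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Composite left/right eye renders into a red/cyan anaglyph frame.
    Expects two HxWx3 uint8 images; the glasses pass red to the left eye
    and cyan (green + blue) to the right eye."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red channel from the left-eye render
    out[..., 1] = right_rgb[..., 1]     # green channel from the right-eye render
    out[..., 2] = right_rgb[..., 2]     # blue channel from the right-eye render
    return out
```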
External View of Cave Experience.
FPV of Cave Experience.
Frames is a short film telling the story of a war veteran who processes trauma through animation. I wrote, directed, and handled the cinematography and visual effects for this project with the help of my group in our Production 1 class. This was my first time directing, and the first time on set for most of our group, which made it both a real challenge and a fabulous chance to experiment and collaborate with fresh eyes. The piece is pretty rough around the edges, but it was a wonderful experience for me and a real chance to grow as an artist.
For this project, our class constructed compact Raspberry Pi-based robots that navigated an arena autonomously and played "tag" using YOLO image segmentation and OpenCV in Python. We then used Unreal Engine to projection map dynamic environments into the arena, which our robots had to adapt to. Lastly, we developed an IR-based outside-in tracking system to move virtual robots in the Unreal environment in real time, which let us projection map dynamically onto the real robots and play the classic game "Snake" against each other, with tails the robots were trained to avoid.
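The "tag" behavior reduces to segmenting the other robot in the Pi camera feed and steering toward the mask centroid. Here is a minimal sketch of that loop; the segmentation weights and motor-driver call are hypothetical stand-ins for the class's actual setup.

```python
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("robot-seg.pt")            # hypothetical custom segmentation weights

def set_motors(left, right):
    """Placeholder for the Pi's motor driver (e.g. PWM on two GPIO pins)."""
    pass

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    if result.masks is None:
        set_motors(0.3, -0.3)           # nothing seen: spin in place and search
        continue
    mask = result.masks.data[0].cpu().numpy()      # first detected robot's mask
    ys, xs = np.nonzero(mask)
    cx = xs.mean() / mask.shape[1]                 # target's horizontal position, 0..1
    turn = (cx - 0.5) * 2.0                        # steer toward the mask centroid
    set_motors(0.5 + turn, 0.5 - turn)
```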
Demo projecting a motion-tracked face mesh to create an interactive talking character.
Robot Navigating a Basic Arena.
Arena with Desert Themed Projection Mapping.
Arena with Arctic Themed Projection Mapping.
Arena with Rave Themed Projection Mapping, and Dynamic Virtual Spotlight Following Robot.
Segmented Model Showing the Robot's Perception of the Arena.
Over the last year, I have had the opportunity to assist the Rochester Subway Archival Team at RIT by creating 3D reconstructions of the subway and surrounding area through photogrammetry, Gaussian splatting, and neural radiance fields, then presenting them in virtual reality using Unreal Engine so they are fully explorable and interactive. This was not only a great technical challenge, but also an excellent chance for me to better understand the history of the city I just moved to.
Gaussian Splatting reconstruction of subway interior in Unreal Engine.
Sample of Dataset Used for Gaussian Splatting Training.
Photogrammetry reconstruction of Broad Street Bridge in Unreal Engine.
Sample of Dataset Used for Photogrammetry Reconstruction.
I was given the opportunity to assist the Irish-American Heritage Memorial by creating a previsualization of the upcoming site and a video presentation to show it off. The project required a photoscan of a model of the proposed memorial, which was overhauled in Unreal Engine and placed into a Gaussian splat of the surrounding area. This was a super fun project and a great chance to give back to the community.
As part of my roles at RIT Marketing and Communications, I directed a trio of high-energy 360 videos for the school's TikTok page. These projects gave me an excellent opportunity to experiment with techniques that use 360-degree capture to achieve otherwise unobtainable shots.
SOFA 517- Virtual Production
SOFA 311- Image Capture and Production Technology
SOFA 312- Digital Post Production Technology
IMGS 180- Object Oriented Scientific Computing
IMGS 261- Linear and Fourier Methods in Imaging
IMGS 361- Image Processing
IMGS 221- Vision and Psychophysics
IMGS 351- Fundamentals of Color Science
IMGS 251- Radiometry
SOFA 209- Introduction to 3D Modeling
IMGS 362- Machine Learning For Image Analysis
SOFA 517- IT Fundamentals for Digital Media
DHSS 337- Media Narrative
COMM 201- Public Speaking
COMM 223- Digital Design in Communication
When I was little, my dad and I would always build model kits together. I started getting back into the hobby last year, and these are some I'm especially proud of.
Here's some silly stuff, enjoy!
Not 100% Sure What to Call This
My Awesome Brother
Our Awesome Galaxy
Even Less Sure What To Call This
My Awesome Dog
My Other Awesome Dog
Our Awesome Galaxy Over Bryce Canyon
Yosemite National Park
Fall River at Night