This project was inspired by the visual glitch that occasionally happens with video codecs. Before I knew the term for it, I'd encounter the effect when skipping through videos in VLC Media Player back in the day.
It also shows up as an intentional edit, like in the gif here.
This is a digital effect known as Datamoshing, where stale color information gets pushed around by the motion in the video.
I wanted to recreate this effect for an animated character, either to give the impression of them being digital and glitched, like the Codex enemy from XCOM 2, or to corrupt the visuals of a digital viewpoint, as in SOMA.
The effect relies on the camera's view of the scene. I edited a First Person blueprint to hold two SceneCaptureComponent2D components: one captures the colors of the scene at the moment the effect activates and saves them into a RenderTarget.
The other runs a post-process effect every frame while the effect is active, updating the captured colors based on the velocity buffer.
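As a rough idea of the setup, here is what that blueprint corresponds to in C++. This is only an illustrative sketch, not the project's actual code: the class and names (ADatamoshCharacter, SnapshotCapture, MoshCapture, MoshRenderTarget) are assumptions made for the example.

```cpp
// --- DatamoshCharacter.h (sketch) ---
#pragma once
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "DatamoshCharacter.generated.h"

class UCameraComponent;
class USceneCaptureComponent2D;
class UTextureRenderTarget2D;

UCLASS()
class ADatamoshCharacter : public ACharacter
{
	GENERATED_BODY()
public:
	ADatamoshCharacter();
	void ActivateDatamosh();

	UPROPERTY(VisibleAnywhere)
	UCameraComponent* FirstPersonCamera;

	UPROPERTY(VisibleAnywhere)
	USceneCaptureComponent2D* SnapshotCapture;   // captures once, when the effect starts

	UPROPERTY(VisibleAnywhere)
	USceneCaptureComponent2D* MoshCapture;       // re-captures every frame while active

	UPROPERTY(EditAnywhere)
	UTextureRenderTarget2D* MoshRenderTarget;    // stores the moshed colors + alpha
};

// --- DatamoshCharacter.cpp (sketch) ---
#include "DatamoshCharacter.h"
#include "Camera/CameraComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

ADatamoshCharacter::ADatamoshCharacter()
{
	FirstPersonCamera = CreateDefaultSubobject<UCameraComponent>(TEXT("FirstPersonCamera"));
	FirstPersonCamera->SetupAttachment(GetRootComponent());

	// Captures the scene colors once, at the moment the effect activates.
	SnapshotCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("SnapshotCapture"));
	SnapshotCapture->SetupAttachment(FirstPersonCamera);
	SnapshotCapture->bCaptureEveryFrame = false;
	SnapshotCapture->CaptureSource = ESceneCaptureSource::SCS_FinalColorLDR;

	// Runs every frame while the effect is active; its post-process material
	// pushes the stored colors around using the velocity buffer.
	MoshCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("MoshCapture"));
	MoshCapture->SetupAttachment(FirstPersonCamera);
	MoshCapture->bCaptureEveryFrame = true;
}

void ADatamoshCharacter::ActivateDatamosh()
{
	// Freeze the current view into the render target the post-process reads from.
	SnapshotCapture->TextureTarget = MoshRenderTarget;
	SnapshotCapture->CaptureScene();
}
```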
I had hoped to use a plugin to generate an Optical Flow buffer, the perceived vectors of change for an image in motion. But after a couple of attempts, Unreal's built-in velocity buffer turned out to be the closest tool available. Fortunately, it worked well with animated meshes, which my idea relied on.
Velocity Buffer
The post-process material uses the velocity buffer to offset where on screen it samples from.
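In the project this lives in the material graph, but the per-pixel math boils down to something like the plain C++ sketch below. It assumes the velocity is expressed as a UV-space offset; the names and the point-sampling are purely illustrative.

```cpp
#include <algorithm>

struct Float2 { float X, Y; };
struct Float4 { float R, G, B, A; };

// Read the previously stored moshed color at the position the current pixel
// "came from": the velocity buffer says how far the pixel moved since last
// frame, so we sample the stored render target at UV minus that motion vector.
Float4 SampleMoshed(const Float4* MoshRT, const Float2* Velocity,
                    int Width, int Height, int X, int Y)
{
    Float2 UV = { (X + 0.5f) / Width, (Y + 0.5f) / Height };
    Float2 SourceUV = { UV.X - Velocity[Y * Width + X].X,
                        UV.Y - Velocity[Y * Width + X].Y };

    int SX = std::clamp(int(SourceUV.X * Width),  0, Width  - 1);
    int SY = std::clamp(int(SourceUV.Y * Height), 0, Height - 1);
    return MoshRT[SY * Width + SX];
}
```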
Since we're working in screen space and want to apply the effect based on a specific mesh on screen, we use the custom stencil to mask out the mesh. This mask ignores depth testing, so the post-process would always be visible even when the mesh is blocked from view, unless we use our own method in the Post-Process Material to test the depth.
To give the effect a little push outside of the stencil, the alpha value of the render target is treated as a buffer for the life/strength of the effect as seen on screen: it is refreshed by the stencil and decays over time.
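One way those two ideas combine per pixel is sketched below in plain C++ (the material-graph equivalent reads the Custom Stencil, Custom Depth and Scene Depth scene textures). The occlusion test via custom depth, the bias, and the decay rate are assumptions for the example, not the project's exact setup.

```cpp
#include <algorithm>

struct MaskInputs
{
    float CustomStencil;   // 1 where the target mesh wrote to custom stencil, else 0
    float CustomDepth;     // depth written by the stencil-marked mesh
    float SceneDepth;      // regular scene depth at this pixel
    float PrevAlpha;       // life/strength stored in the render target's alpha
};

float UpdateLife(const MaskInputs& In, float DeltaTime, float DecayRate)
{
    // Custom stencil ignores normal depth testing, so reject pixels where the
    // mesh is actually occluded by comparing its depth against the scene depth
    // (with a small bias to avoid self-rejection).
    const bool bVisible = In.CustomStencil > 0.5f &&
                          In.CustomDepth <= In.SceneDepth + 1.0f;

    // Refresh the life where the mesh is visible; otherwise let it decay,
    // leaving a fading trail outside the stencil.
    float Life = bVisible ? 1.0f : In.PrevAlpha - DecayRate * DeltaTime;
    return std::clamp(Life, 0.0f, 1.0f);
}
```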
Randomized screen-space blocks are added to decay the trail left behind, which enhances the digital nature of the effect.
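A common way to get those blocks is to quantize the UV into a grid and hash each cell per frame, then knock down the life in cells that roll under a threshold. The hash, grid size, and drop chance below are arbitrary illustrative values.

```cpp
#include <algorithm>
#include <cmath>

// Cheap per-block pseudo-random value in [0,1); any stable hash would do.
float BlockHash(int BlockX, int BlockY, int Frame)
{
    float N = std::sin(BlockX * 12.9898f + BlockY * 78.233f + Frame * 37.719f) * 43758.5453f;
    return N - std::floor(N);
}

// Quantize the screen into blocks and decay the trail faster in randomly
// chosen blocks each frame.
float ApplyBlockDecay(float Life, float U, float V, int Frame,
                      int BlocksX = 64, int BlocksY = 36, float DropChance = 0.1f)
{
    int BX = std::min(int(U * BlocksX), BlocksX - 1);
    int BY = std::min(int(V * BlocksY), BlocksY - 1);

    if (BlockHash(BX, BY, Frame) < DropChance)
        Life *= 0.5f; // this block loses half its remaining strength this frame

    return Life;
}
```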
The updated render target is then used by a Post-Process Volume, which blends between the effect and the scene view based on the alpha.
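Hooking the blend material into the volume from C++ might look roughly like this. The parameter name "MoshRT" and the lerp described in the comment are assumptions about the material, not the project's exact names.

```cpp
#include "Engine/PostProcessVolume.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

void SetupDatamoshBlend(APostProcessVolume* Volume, UMaterialInterface* BlendMaterial,
                        UTextureRenderTarget2D* MoshRenderTarget)
{
	// The blend material samples the render target and effectively does
	// FinalColor = lerp(SceneColor, MoshColor, MoshAlpha).
	UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(BlendMaterial, Volume);
	MID->SetTextureParameterValue(TEXT("MoshRT"), MoshRenderTarget);

	Volume->Settings.AddBlendable(MID, 1.0f);
	Volume->bUnbound = true; // apply everywhere, not just inside the volume bounds
}
```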
A good start for the ideas I had for moshing, but not quite complete. I had to jump through some hoops to get buffers I could use, and that's no real substitute for digging into the RHI or RDG to get exactly what I need. I have seen other experiments with datamoshing, and they look like they would be fun to achieve.
A different approach to scene capture. Relying on scene capture components had its problems (performance, not capturing the final render), and the velocity buffer was no substitute for true optical flow (inconsistent results between animated and moving static meshes, delta buildup between captures, and projection issues). I had ambitions of using AMD's FSR plugin for its optical flow buffer, as well as UE's own final render buffer, but neither was as open to access as I had hoped, and diving into those systems would have been too time-consuming for portfolio work.
Learning about the Jump Flood Algorithm (JFA) has given me ideas for smoothing the effect's blend outside of mesh silhouettes; a sketch of the core pass follows below.
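For reference, one JFA pass boils down to each pixel checking its neighbours at the current jump distance and keeping the nearest known seed (here, a pixel inside the mesh silhouette); repeating with halving step sizes yields an approximate distance field that could drive a smoother falloff. This is just the textbook algorithm sketched in C++, not something implemented in the project.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// A "seed" is the screen position of the nearest silhouette pixel found so far.
struct Seed { float X = -1.0f, Y = -1.0f; bool bValid = false; };

// One jump-flood pass at the given step size. Seed pixels start with their own
// coordinates and bValid = true; everything else starts invalid.
void JumpFloodPass(std::vector<Seed>& Seeds, int Width, int Height, int Step)
{
    std::vector<Seed> Out = Seeds;

    for (int Y = 0; Y < Height; ++Y)
    for (int X = 0; X < Width;  ++X)
    {
        Seed  Best     = Seeds[Y * Width + X];
        float BestDist = Best.bValid ? std::hypot(Best.X - X, Best.Y - Y) : 1e30f;

        // Check the 8 neighbours (and self) at the current jump distance.
        for (int DY = -1; DY <= 1; ++DY)
        for (int DX = -1; DX <= 1; ++DX)
        {
            int NX = X + DX * Step, NY = Y + DY * Step;
            if (NX < 0 || NY < 0 || NX >= Width || NY >= Height) continue;

            const Seed& Candidate = Seeds[NY * Width + NX];
            if (!Candidate.bValid) continue;

            float Dist = std::hypot(Candidate.X - X, Candidate.Y - Y);
            if (Dist < BestDist) { BestDist = Dist; Best = Candidate; }
        }

        Out[Y * Width + X] = Best;
    }

    Seeds = std::move(Out);
}
```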