What is this project about?
This project is titled Implicit Neural Representation for Volumetric Video Workflows.
A volumetric video is a 3-D video where viewers can view the scene from any position or angle. For example, if you wanted to watch a football match in the metaverse, you could watch the game from the centre of the pitch.
The University of Bristol and Condense Reality are collaborating to research and improve the technology behind live 3-D/volumetric productions.
During this collaboration we need relevant data to test our ideas, which is why we are looking for participants...
As there is limited data online, our main objective is to collect our own to improve the quality and reliability of our research.
Who is Condense Reality?
You can learn more about our collaborators here:
What "technologies" are we interested in researching?
There are two main research interests:
1. Creating 3-D models of live events*
2. Streaming those 3-D models to viewers*
*Note that our research is not associated with any "generative AI" tools (like Stable Diffusion or DALL-E)
💠
What is involved with (1)?
This area of research is called "Novel View Synthesis" (also "Inverse Graphics"). We typically use a machine learning algorithm to learn a 3-D representation from a collection of images/videos. In our case, we are looking to generate better-quality 3-D representations quickly.
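For the curious, the core idea can be sketched in a few lines: a scene representation is fitted so that the pixels it renders match the captured images. The sketch below is a deliberate toy, with the "scene" reduced to a single learnable colour and the "renderer" reduced to identity; the pixel values and learning rate are illustrative assumptions, not our actual research setup.

```python
import numpy as np

# Toy sketch: fit a scene representation so its rendered pixels match
# the captured images (the heart of novel view synthesis).
# Here the "scene" is one learnable colour and "rendering" is trivial.

captured_pixels = np.array([0.8, 0.75, 0.82, 0.79])  # hypothetical observations
scene = np.array(0.0)                                # learnable parameter
lr = 0.1

for _ in range(200):
    rendered = scene                                  # render the scene (trivially)
    grad = 2 * np.mean(rendered - captured_pixels)    # d(MSE)/d(scene)
    scene = scene - lr * grad                         # gradient descent step

# scene converges to the value that best explains the observations
```

Real systems replace the trivial renderer with differentiable volume rendering and the single colour with millions of parameters, but the optimisation loop is the same shape.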
💠
What is involved with (2)?
This area of research involves "Multi-Video Compression" and "Implicit Neural Representations". Multi-video compression allows us to make many video streams extremely small and transfer them over the internet to end-users (viewers). This is done by representing many videos as an implicit neural representation - which is essentially a neural network.
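To make "essentially a neural network" concrete, here is a minimal sketch of an implicit neural representation for video: a tiny network that maps a pixel coordinate and time (x, y, t) to an RGB colour, so the video is stored as the network's weights rather than as frames. The layer sizes, activations, and random weights are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of an implicit neural representation (INR) for video:
# a small MLP mapping a normalised (x, y, t) coordinate to an RGB colour.
# Weights here are random; in practice they are trained to reproduce
# the video, and shipping the weights *is* the compressed video.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 32))   # input layer: (x, y, t) -> 32 hidden units
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3))   # output layer: 32 hidden units -> (r, g, b)
b2 = np.zeros(3)

def inr(coords):
    """Evaluate the network at normalised (x, y, t) coordinates."""
    h = np.tanh(coords @ W1 + b1)               # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid keeps RGB in [0, 1]

# Query the colour of pixel (0.5, 0.5) at time t = 0.25.
rgb = inr(np.array([0.5, 0.5, 0.25]))
```

Because the network can be queried at any coordinate, the same weights can serve many video streams and resolutions at once, which is what makes this attractive for streaming.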
Who is organising and funding this project?
This project is funded and organised by MyWorld and Innovate UK.