Predict Engine Interface overview

Predict Engine - General Overview

Designing the 3D scene

When using the Predict Engine plug-in, the 3D scene (cameras, geometries, lights, environments) is entirely defined using the Unity interface:

  • Game Objects are created in the Hierarchy; their Transform component defines their position in the scene,

  • Components are added to the Game Objects to define their properties: Camera, Light, Skybox, Mesh Renderer/Mesh Filter (geometries and material assignment).

Predict Engine, being a spectral and polarized rendering engine, requires additional settings that Unity does not support. For these settings, additional components are available to complement the existing ones: UVR Camera Settings/UVR Physical Sensor, UVR Light Settings, and UVR Skybox Settings (geometries do not require additional settings). These components are detailed in the appropriate sections of this documentation.

Some UI components are also available to let the user move the camera or change elements in the scene at runtime.

Starting the render

Once the 3D scene is ready, Predict Engine can be started and the scene is automatically configured from Predict Unity. In Unity, the Predict Engine render can be displayed either in front of the Unity render in the Game view (Predict Engine overlay) or in a dedicated window (Predict Engine window). See below for more details.

Waiting for the simulation to be ready

Behind the scenes, Predict Engine produces true-to-life images thanks to advanced techniques devoted to a single task: computing the correct amount of energy carried by each ray of light captured by the virtual camera, as if all light-matter interactions were taking place in a real scene. In practice, this is a highly demanding task that requires simulating virtually all the bounces that light emitted from the sources of the virtual scene may undergo before reaching the camera.

This process is commonly referred to as "Global Illumination" and can be solved in many ways. Internally, Predict Engine computes the global illumination using the Monte Carlo "Path Tracing" algorithm. The following video from Walt Disney Animation Studios provides a practical introduction to Path Tracing:

Tracing light paths from the camera and sorting rays drastically helps reduce rendering times. Nevertheless, the global illumination process remains computation-intensive: hundreds to thousands of light path samples per pixel (spp) might still be necessary to obtain accurate results.

In the case of Path Tracing (and Monte Carlo methods in general), an insufficient number of light path samples translates to noise in the final image. Such noise can be removed by generating more samples per pixel, at the cost of increased rendering time. This can be observed in the following examples (results obtained on a workstation powered by three NVIDIA RTX 2080 graphics cards):

32 spp (computed in less than 1 second)

512 spp (computed in 3 seconds)

131 000 spp (computed in 15 minutes)

The number of path samples required to obtain noise-free images depends on the scene content and, more specifically, on the complexity of the paths that light follows before reaching the camera.
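This convergence behavior can be illustrated with a small Monte Carlo estimator (a conceptual sketch, not Predict Engine code — `sample_path` is a toy stand-in for evaluating one light path): a pixel value is an average of random path samples, and the noise (standard deviation of the estimate) shrinks roughly as 1/√spp.

```python
import random
import statistics

def sample_path() -> float:
    # Toy stand-in for one light-path sample: a random radiance
    # value whose true mean is 0.5.
    return random.random()

def render_pixel(spp: int) -> float:
    # A pixel value is the average of spp independent path samples.
    return sum(sample_path() for _ in range(spp)) / spp

def noise(spp: int, trials: int = 200) -> float:
    # Noise = standard deviation of the pixel estimate across
    # many independent renders of the same pixel.
    return statistics.pstdev(render_pixel(spp) for _ in range(trials))

random.seed(0)
for spp in (32, 512, 8192):
    print(f"{spp:5d} spp -> noise ~ {noise(spp):.4f}")
```

Multiplying the sample count by 16 (32 → 512 spp) only divides the noise by about 4, which is why the last traces of noise are the most expensive to remove.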

For instance, indoor scenes take longer to render when most light comes from the external environment and must cross windows to light the objects. In the same vein, caustic effects due to refractions or shiny surface reflections might require more time to render smoothly, as they involve two or more bounces between the camera and the light source. This can be observed in the following examples:

32 spp (computed in less than 1 second)

512 spp (computed in 3 seconds)

131 000 spp (computed in 15 minutes)

As can be seen, noise remains visible in the caustic areas at 512 spp (middle image). In contrast, at equivalent spp, the result obtained in the previous example with the textile material is almost noise-free.

Editing the scene at runtime

When Predict Engine is rendering, some interactions are available: moving the camera, changing the tone mapper, changing materials, rotating the environment, and so on.
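As an illustration of what a tone mapper does (Predict Engine's own tone mappers are configured through its components and UI; the function below is the generic Reinhard operator, not Predict Engine code), tone mapping compresses unbounded HDR radiance into displayable [0, 1) values:

```python
def reinhard(luminance: float) -> float:
    # Classic Reinhard operator: maps [0, inf) into [0, 1),
    # leaving dark values almost untouched and rolling off highlights.
    return luminance / (1.0 + luminance)

# Dark pixels are barely changed; bright pixels are compressed.
print(reinhard(0.05))  # ~0.048
print(reinhard(1.0))   # 0.5
print(reinhard(50.0))  # ~0.98
```

Because tone mapping is a per-pixel post-process on the already-computed simulation, changing it does not require restarting the render.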


For any other interaction (changing a geometry or the environment, moving a geometry, loading a new scene, etc.), the Predict Engine scene must be reloaded for the changes to take effect. Making these interactions easier is a feature we are currently working on; it should be available soon.

Predict Engine overlay - Predict Engine embedded in Unity applications with scripted interactions

The Predict Engine overlay is a texture containing the Predict Engine render, displayed in front of the Unity render in the Game view. It is best suited for embedding in applications or for working with scripted user interactions. It works as follows:

(1) : The 3D scene (cameras, geometries, lights, environments) is defined and constructed entirely in Unity,


(2) : Additional settings specific to Predict Engine can be defined using specific components,


(3) : The overlay mode is enabled via the menu "PredictSuite/Engine/Game Overlay/Enabled" and the application is started as usual in Unity; Predict Engine then starts in the background and begins loading the scene,


(4) : When the scene is loaded, the Predict Engine simulation appears in front of the Unity render in the Game tab. If the Game camera moves, or if interactive settings are edited by script or manually, the Predict Engine render is updated automatically.

The Predict Engine overlay can also be enabled and disabled via scripting.

Predict Engine window - Predict Engine in Unity for image generation

The Predict Engine window has a dedicated Unity tab. It is best suited for rendering static images and preparing scenes for Predict Engine. It works as follows:

(1) : The 3D scene (cameras, geometries, lights, environments) is defined and constructed entirely in Unity,


(2) : Additional settings specific to Predict Engine can be defined using specific components,


(3) : The renderer view is opened via the menu "PredictSuite/Engine/Engine View",


(4) : Predict Engine is started directly in the engine view using the Play button in the window; the Predict Engine simulation appears in this window once the scene is loaded.


The Renderer Window contains the following buttons:

Hand mode: when this mode is selected, you can move the simulation in the window. You can also move the simulation in any mode using the mouse scroll button.

Play/Stop: starts the render if the current state is off, stops the render if the current state is on.

Zoom mode: when this mode is selected, you can zoom in on the rendered image in the window. You can also zoom in any mode using the mouse scroll wheel.


Pause: pauses/unpauses the render. The color of the button indicates the state (colored = unpaused, gray = paused).

Picking mode (Play mode only): when this mode is selected, you can pick a pixel in the simulation to get details on the pixel's value (normal, depth, channels, etc., depending on the sensor).

Reload:

  • if the "auto reload" option is disabled in the Preferences, reloads the Predict Engine scene with the updates made in the Unity scene,

  • if the "auto reload" option is enabled in the Preferences, enables/disables the auto reload mode. The color of the button indicates the state (colored = auto reload enabled, gray = auto reload disabled).

Camera mode (Play mode only): when this mode is selected, you can move the scene camera using the following controls:

  • Pan: scroll-wheel click and drag,

  • Orbit: left click or right click and drag,

  • Zoom: scroll wheel.

The camera transform definition will be automatically set to "Scene View".

Save Render: saves the render to a PNG or an EXR file.

Center simulation: resets the zoom and position of the simulation in the window to center it and fill the available space.

Full Screen: displays the simulation in full screen.

1:1 Scale: resets the zoom and position of the simulation in the window to center it and view it at a 1:1 pixel scale.

Stats: shows/hides the current process statistics (state, render time, samples per pixel, etc.).

Camera selection: defines which camera in the scene is currently being rendered. The camera can be disabled in the Hierarchy. The camera can be defined using either a Unity "Camera" component or a "Physical Sensor" component.

Camera Transform Definition: defines whether the position of the selected camera is given by its "Transform" component or by the transform of the camera in the Scene view.

Transform Definition = Game View

Transform Definition = Scene View

Resolution selection: defines the resolution/ratio of the computed simulation. New resolutions and ratios can be defined using the resolution combo-box in the Game view.

Interactive settings: some elements in the scene can be edited interactively while Predict Engine is running. Depending on the definition of the scene, some of the following icons will appear:

  • optics: defines the field of view of the camera, the lens radius and the focus distance,

  • environment: rotates the HDRI environment,

  • post-process: edits the current post-process.
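The lens radius and focus distance follow standard thin-lens behavior. As a conceptual sketch (not Predict Engine's API), the blur that an out-of-focus point produces on the sensor can be estimated from the aperture diameter, focal length, focus distance and object distance:

```python
def circle_of_confusion(aperture_diameter: float, focal_length: float,
                        focus_distance: float, object_distance: float) -> float:
    # Thin-lens blur-spot (circle of confusion) diameter on the sensor
    # for a point at object_distance when the lens focuses at
    # focus_distance. All quantities in meters.
    return (aperture_diameter * focal_length
            * abs(focus_distance - object_distance)
            / (object_distance * (focus_distance - focal_length)))

# 50 mm lens with a 25 mm aperture diameter, focused at 2 m:
print(circle_of_confusion(0.025, 0.05, 2.0, 2.0))  # 0.0 (in focus, sharp)
print(circle_of_confusion(0.025, 0.05, 2.0, 1.0))  # > 0 (blurred)
```

A lens radius of zero corresponds to a pinhole camera: the blur diameter is zero at every distance and the whole scene is in focus.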

Channel selection: defines which channel(s) of the camera are displayed on screen. The "Output" is the tri-channel output as defined in the "Camera Settings" component. The other available channels are listed in this combo-box, depending on the sensor definition and selected layers (see the Optical Instruments section for more details). When the selected channel is not "Output", the post-process options can be edited directly in the Engine view interactive settings (section above).