Research

I specialize in Computer Animation, Rendering, and Human-Computer Interfaces, working on topics at the crossroads of Arts, Computer Graphics, and AI. One of my main interests is to transpose knowledge and tools from photographers and cinematographers into virtual worlds, to create better narrative experiences through the intuitive and efficient control of cameras, lights, and objects. One of the main goals of my research is to build the next generation of narrative content creation tools.

This drives me to address challenges inherent to the full automation of, or user assistance in, 3D control tasks; the study of light source models and rendering methods; and the analysis of real data (e.g. from movies) to extract meaningful insights for future research in the domain. Supporting creative content creation, and in particular the control of cameras and lights, is of interest in a large set of applications: 3D modelling and CAD tools; the visualization and exploration of complex data or scenes (e.g. medical, architectural, cultural heritage, traffic flow); and the creative industries (e.g. VFX, animated movies, movie preproduction, 3D games, immersive TV).

I collaborate with academic and industrial partners, mainly in Europe and North America (see my collaboration-map).

Below is an overview of my published results on these topics. You can also find accompanying videos of the papers on my YouTube channel, and more details on the projects' pages:

Directing the Photography: Combining Cinematic Rules, Indirect Light Controls and Lighting-by-Example 

Intuitive and Efficient Camera Control with the Toric Space

The Director's Lens: An Intelligent Assistant for Virtual Cinematography

Enriched Light Sources (ELS) 

Smart Prototyping Interfaces (Virtual Cameras and Lights)

The manipulation tools offered by 3D modelers remain very naive compared to the high-level creative tasks users are pursuing, requiring a long training period before users are able to finely tune objects, cameras and lights. In particular, I consider virtual cameras and lights to be the poor relations of 3D creation tools. Practically no support is offered for their manipulation, forcing users to cast it into a very naive combination of low-level mathematical operations such as translations, rotations and scalings. This is critical, as very simple operations on the visual composition (e.g. changing the size of objects, their on-screen position, or the way each of them is lit) are counter-intuitive in such a low-level control space. In turn, this makes it very challenging and time-consuming for expert (and even more so for non-expert) users to design and animate the on-screen layout of scene elements. My research on this topic focuses on providing smarter tools and assistive interfaces that bring manipulation tasks closer to the users' needs while remaining efficient.
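
As a toy illustration of the gap between screen-space intent and low-level camera parameters, the sketch below (a hypothetical helper, assuming a simple pin-hole camera model) solves one such composition request -- "make this object fill half the frame" -- for the dolly distance:

```python
import numpy as np

def dolly_for_screen_height(world_height, screen_fraction, vfov):
    """Distance at which an object of height `world_height` covers
    `screen_fraction` of the image height, for a pin-hole camera with
    vertical field of view `vfov` (in radians).

    The visible world height at distance d is 2 * d * tan(vfov / 2),
    so we solve screen_fraction = world_height / (2 * d * tan(vfov / 2)).
    """
    return world_height / (2.0 * np.tan(vfov / 2.0) * screen_fraction)

# A 1.8 m character should fill half the frame through a 60-degree lens:
print(dolly_for_screen_height(1.8, 0.5, np.radians(60)))  # ~3.12 m
```

Even this single constraint requires inverting the projection model; composing several such constraints at once is what makes raw translate/rotate/scale interfaces so tedious.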

Related publications:

Dimensionality Reduction, Novel Representation Spaces (Cameras, Lights) 

When trying to effectively control a real or virtual camera, a central issue is a highly non-linear problem: how to determine all of the camera's degrees of freedom -- expressed in a 7D configuration space (world coordinates) -- from a large set of constraints (e.g. screen size and position, view angle, head room, visual balance) that are mainly expressed in the 2D image space. To reduce the complexity of these problems, our two main contributions have been (1) a comprehensive model for virtual camera control (a dynamic spatial partition into Director Volumes) that allows a system to automatically reason at both a geometrical and a semantic level to position cameras, plan paths and make cuts, while following a user-defined directorial style, and (2) a novel representation space for cameras (the Toric Space) which is both compact and robust, and provides a significant dimensionality reduction -- it casts 7-degree-of-freedom problems into 3-degree-of-freedom problems.
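
As a rough illustration of the idea (a simplified sketch built on the law of sines, not the exact parameterization from the papers), the following maps Toric-like coordinates (alpha, theta, phi) and two targets to a world-space camera position; every point produced this way sees the segment AB under the angle alpha:

```python
import numpy as np

def toric_to_world(A, B, alpha, theta, phi, up=(0.0, 1.0, 0.0)):
    """Toy Toric-like mapping: camera position around two targets A and B.

    alpha : angle (radians) under which the camera sees segment AB
    theta : angle at A between AB and the camera direction, in (0, pi - alpha)
    phi   : rotation of the camera's half-plane around the AB axis
    (assumes `up` is not parallel to AB)
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    t = B - A
    d = np.linalg.norm(t)
    t /= d
    # Pick a direction orthogonal to AB, then rotate it by phi around AB
    # (Rodrigues' formula, simplified since n is orthogonal to t).
    n = np.asarray(up, float) - np.dot(up, t) * t
    n /= np.linalg.norm(n)
    n = n * np.cos(phi) + np.cross(t, n) * np.sin(phi)
    # Law of sines in triangle (A, B, camera): the camera-to-A distance
    # that realizes the viewing angle alpha for a given theta.
    dist = d * np.sin(alpha + theta) / np.sin(alpha)
    return A + dist * (np.cos(theta) * t + np.sin(theta) * n)

# Place a camera seeing two characters under a 20-degree angle:
cam = toric_to_world([0, 0, 0], [2, 0, 0],
                     np.radians(20), np.radians(50), np.radians(10))
```

The three angles, rather than seven world-space parameters, then become the search space in which composition constraints are solved.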

Related publications:

Inverse-control problems (constrained optimization)

Efficient Composition for Virtual Camera Control. Lino & Christie, ACM SCA 2012

Properly conveying the content of a (static or dynamic) virtual scene requires providing informative and visually appealing viewpoints, or series of viewpoints. One problem in computing such viewpoints is to provide effective (while still as efficient as possible) means of setting cameras, scene elements and lights so as to best highlight the content of the scene. This involves satisfying a wide range of aesthetic criteria, from the visual layout of a number of characters or objects to the way they are lit. However, visual composition problems are often over-constrained and highly non-linear. They thus call for good mathematical formulations (which criteria must be satisfied, and how they combine together) and for optimization models that can efficiently, but also effectively, find a "good" solution.
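
A minimal sketch of this formulation (illustrative names and values; a simplified pin-hole projection and an off-the-shelf optimizer stand in for the dedicated solvers of the papers): the desired on-screen placement of two targets is expressed as a cost over the camera position and minimized numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: two targets and their desired positions in
# normalized screen coordinates ([-1, 1]^2).
targets = [np.array([0.0, 1.6, 0.0]), np.array([2.0, 1.6, 1.0])]
desired = [np.array([-0.4, 0.1]), np.array([0.4, 0.0])]
TAN_HALF_FOV = np.tan(np.radians(30.0))

def screen_pos(cam, point):
    """Project a world point with a simplified pin-hole camera that always
    looks at the centroid of the targets (no roll, square aspect)."""
    fwd = sum(targets) / len(targets) - cam
    fwd /= np.linalg.norm(fwd)
    right = np.cross(np.array([0.0, 1.0, 0.0]), fwd)
    right /= np.linalg.norm(right)
    up = np.cross(fwd, right)
    v = point - cam
    z = np.dot(v, fwd)
    if z < 1e-3:                      # point behind the camera:
        return np.array([1e3, 1e3])   # return a huge placement error
    return np.array([np.dot(v, right), np.dot(v, up)]) / (z * TAN_HALF_FOV)

def composition_cost(cam):
    # Sum of squared screen-space placement errors; a full system would
    # add visibility, size and orientation terms to this objective.
    return sum(np.sum((screen_pos(cam, t) - d) ** 2)
               for t, d in zip(targets, desired))

result = minimize(composition_cost, x0=np.array([1.0, 1.7, -4.0]),
                  method="Nelder-Mead")
print(result.x, composition_cost(result.x))
```

Adding visibility, relative size and balance terms makes the cost landscape multi-modal, which is precisely what motivates more robust, purpose-built formulations and solvers.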

Related publications:

(Empirical) Knowledge formalization: procedural models (film grammar, storytelling)

When looking at how films are made, one can observe that the film editing process – the timing and assembly of shots into a continuous flow of images – is a crucial step in constructing a coherent cinematographic sequence (regardless of whether the footage is taken from real cameras, virtual cameras, or a mix of both). Film-makers rely on empirical rules; however, these rules can be used very differently by two different film-makers, for many reasons. The main difficulty in reproducing the common practices of film-makers is to provide expressive and powerful mathematical formulations of what makes a "good" edit. The main challenge is therefore to formalize the quality of an edit, based only on the story and the content of the shot images, and to provide computational models of how to cut and assemble shots the "right" way (or, e.g., as Spielberg or Zemeckis would do).
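
One simple way to make this concrete (a toy sketch with invented shot labels and hand-tuned penalties, not a model from the papers) is to treat editing as a shortest-path problem: each story beat has candidate shots, cuts between shots carry continuity penalties, and dynamic programming recovers the cheapest edit:

```python
# Hand-tuned costs: shot_cost[b][s] scores how poorly shot s conveys
# story beat b; cut_cost penalizes transitions that break continuity.
shot_cost = [
    {"wide": 0.2, "close_A": 0.9, "close_B": 0.8},  # establishing beat
    {"wide": 0.7, "close_A": 0.1, "close_B": 0.9},  # character A speaks
    {"wide": 0.6, "close_A": 0.9, "close_B": 0.1},  # character B replies
]

def cut_cost(prev, nxt):
    # Cutting to a near-identical framing reads as a jump cut.
    return 1.0 if prev == nxt else 0.1

def best_edit(shot_cost, cut_cost):
    """Viterbi-style dynamic programming over beats x candidate shots."""
    shots = list(shot_cost[0])
    # best[s] = (cost of the cheapest sequence ending in shot s, sequence)
    best = {s: (shot_cost[0][s], [s]) for s in shots}
    for beat in shot_cost[1:]:
        best = {s: min(((c + cut_cost(p, s) + beat[s], seq + [s])
                        for p, (c, seq) in best.items()),
                       key=lambda pair: pair[0])
                for s in shots}
    return min(best.values(), key=lambda pair: pair[0])

print(best_edit(shot_cost, cut_cost))
# (0.6, ['wide', 'close_A', 'close_B'])
```

The hard research question is, of course, what the real cost terms should be -- i.e. how to encode continuity rules and directorial style as functions of the story and of the shot content.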

Related publications:

(Empirical) Knowledge formalization: data-driven models (analysis-synthesis)

Real cinematographers often rely on a number of key visual compositions, smooth camera motions and empirical editing rules with which viewers are familiar; each of them has a well-defined narrative goal, such as highlighting a character's action or motion. This calls for methods to easily reproduce such well-accepted camera shots, camera motions and edits. This can be done by relying on annotated databases of previously recorded camera shots, on input characters' actions or motions, and on data-driven methods to analyze and synthesize such stereotypical film sequences. Here, the main problems are to (i) construct such film databases, (ii) analyze stereotypical editing styles, and then (iii) synthesize new edits in a given film style. The underlying challenges are to provide well-adapted annotation formats, as well as learning/mining techniques that can deal with such data.
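
As a toy illustration of this analysis-synthesis loop (invented labels and corpus; real systems work on far richer annotations), one can estimate shot-transition statistics from annotated films and then sample new sequences in that style:

```python
import random
from collections import Counter, defaultdict

# Hypothetical annotated corpus: each film is reduced to a sequence of
# shot-class labels (the annotation format itself is a research question).
corpus = [
    ["establishing", "medium_A", "close_A", "close_B", "close_A", "wide"],
    ["establishing", "wide", "medium_A", "close_B", "close_A", "wide"],
]

def learn_style(films):
    """Analysis: estimate first-order shot-transition probabilities."""
    counts = defaultdict(Counter)
    for film in films:
        for prev, nxt in zip(film, film[1:]):
            counts[prev][nxt] += 1
    return {prev: {s: n / sum(c.values()) for s, n in c.items()}
            for prev, c in counts.items()}

def synthesize(style, start="establishing", length=6, seed=0):
    """Synthesis: sample a new shot sequence in the learned style."""
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < length and seq[-1] in style:
        nxt = style[seq[-1]]
        seq.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return seq

print(synthesize(learn_style(corpus)))
```

A first-order chain is, of course, far too weak to capture editing style; richer sequence models and features of the characters' actions are where the actual research effort lies.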

Related publications: