Research
I specialize in Computer Animation, Rendering and Human-Computer Interfaces, with a focus on topics at the crossroads of Arts, Computer Graphics, and AI. One of my strongest interests is transposing knowledge and tools from photographers and cinematographers to create better narrative experiences in virtual worlds, through the intuitive and efficient control of cameras, lights and objects. One of the main goals of my research is to build the next generation of narrative content creation tools.
This drives me to address challenges inherent to the full automation of, or user assistance in, 3D control tasks; the study of light source models and rendering methods; and the analysis of real data (e.g. from movies) to extract meaningful insights for future research in the domain. Supporting creative content creation, and in particular controlling cameras and lights, is of interest in a large set of applications: e.g. 3D modelling and CAD tools, visualization/exploration of complex data or scenes (e.g. medical, architectural, cultural heritage, traffic flow), and creative industries (e.g. VFX, animated movies, movie preproduction, 3D games, immersive TV).
I collaborate with academic and industrial partners, mainly in Europe and North America (see my collaboration map).
Below is an overview of my published results on these topics. You can also find accompanying videos of the papers on my YouTube channel, and more details on the project pages:
Directing the Photography: Combining Cinematic Rules, Indirect Light Controls and Lighting-by-Example
Intuitive and Efficient Camera Control with the Toric Space
The Director's Lens: An Intelligent Assistant for Virtual Cinematography
Enriched Light Sources (ELS)
Smart Prototyping Interfaces (Virtual Cameras and Lights)
The manipulation tools offered by 3D modelers remain very naive compared to the high-level creative tasks users pursue, imposing a long training period before users can finely tune objects, cameras and lights. In particular, I consider virtual cameras and lights the poor relations of 3D creation tools. Almost no support is offered for their manipulation, forcing users to cast it into a naive combination of low-level mathematical operations such as translations, rotations and scalings. This is critical, as very simple operations on the visual composition (e.g. changing the size of objects, their on-screen position, or the way each of them is lit) are counter-intuitive in such a low-level control space. In turn, this makes it very challenging and time-consuming for expert (and even more so for non-expert) users to design and animate the on-screen layout of scene elements. My research on this topic focuses on providing smarter tools and assistive interfaces that bring manipulation tasks closer to the users' needs while remaining efficient.
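As a minimal illustration of this mismatch (assuming a simple normalized pinhole camera model; the function name and setup are mine, not from any published tool): even the elementary screen-space goal "make this subject span a given height on screen" has no direct low-level counterpart, and must instead be solved for as a dolly distance.

```python
# Toy sketch (assumed pinhole model): mapping a screen-space goal to the
# low-level camera operation that achieves it. In a naive interface the
# user must perform this inversion mentally; a smart interface solves it.

def dolly_for_screen_size(focal, obj_height, target_screen_h):
    """Distance the camera must stand at so that an object of height
    obj_height spans target_screen_h in the normalized image plane:
    screen height = focal * obj_height / distance, inverted for distance."""
    return focal * obj_height / target_screen_h

# Doubling the desired on-screen size means halving the camera distance.
d_half = dolly_for_screen_size(1.0, 2.0, 0.5)   # subject at half frame height
d_full = dolly_for_screen_size(1.0, 2.0, 1.0)   # subject at full frame height
```

The point of the sketch is that a one-parameter screen-space intent hides a non-obvious inverse problem; for richer composition goals (several subjects, lighting) the inversion is no longer closed-form.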
Related publications:
Intuitive and Efficient Camera Control with the Toric Space [acm] [full-article] [videos]. C. Lino, M. Christie. ACM Transactions on Graphics (TOG) - Proceedings of SIGGRAPH, Vol. 34 (4), pp. 82:1-82:12, 2015, ACM New York, NY, USA.
The Director's Lens: An Intelligent Assistant for Virtual Cinematography [acm] [hal]. C. Lino, M. Christie, R. Ranon, W. Bares. In 19th ACM International Conference on Multimedia, 2011, Scottsdale Arizona, USA.
Directing Cinematographic Drones. Q. Galvane, C. Lino, M. Christie, J. Fleureau, F. Servant, F.-L. Tariolle, P. Guillotel. ACM Transactions on Graphics (TOG) - Presented at SIGGRAPH, 2018. [acm] [hal]
Directing the Photography: Combining Cinematic Rules, Indirect Light Controls and Lighting-by-Example. Q. Galvane, C. Lino, M. Christie, R. Cozot. Computer Graphics Forum - Proceedings of Pacific Graphics, 2018.
An Interactive Interface for Lighting-by-Example [springer] [hal]. H.N. HA, C. Lino, M. Christie, P. Olivier. Lecture Notes in Computer Science, 2010, Volume 6133/2010, Smart Graphics, Pages 244-252
CollaStar: Interaction collaborative avec des données multidimensionnelles et temporelles. [hal]. C. Perrin, M. Christie, F. Vernier, C. Lino. In 25ème Conférence Francophone sur l’Interaction Homme-Machine (IHM), 2013, Bordeaux, France.
Methods, System and Software Program for Shooting and Editing a Film Comprising at least One Image of a 3D Computer-generated Animation. W. Bares, C. Lino, M. Christie, R. Ranon. US Patent n° 20130135315, May 30th 2013.
A Smart Assistant for Shooting Virtual Cinematography with Motion-Tracked Cameras [acm] [hal]. C. Lino, M. Christie, R. Ranon, W. Bares. In 19th ACM International Conference on Multimedia, 2011, Scottsdale Arizona, USA.
Dimensionality Reduction, Novel Representation Spaces (Cameras, Lights)
When trying to effectively control a real or virtual camera, a central issue is a highly non-linear problem: how to determine all of the camera's degrees of freedom -- expressed in a 7D configuration space (world coordinates) -- from a large set of constraints (e.g. screen size and position, view angle, head room, visual balance) -- mainly expressed in the 2D image space. To reduce the complexity of these problems, our two main contributions have been (1) a comprehensive model for virtual camera control (a dynamic spatial partition into Director Volumes) that allows a system to reason automatically, at both a geometric and a semantic level, to position cameras, plan paths and make cuts while following a user-defined directorial style, and (2) a novel representation space for cameras (the Toric Space) which is both compact and robust, and provides a significant dimensionality reduction -- it casts 7-degree-of-freedom problems into 3-degree-of-freedom problems.
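The dimensionality reduction behind the Toric Space can be illustrated in a simplified 2D setting (this sketch is mine and only mirrors the core geometric idea, not the actual 3D parameterization of the paper): by the inscribed-angle theorem, every camera that sees two targets under a prescribed angle lies on a circular arc through them, so a single angular parameter replaces two Cartesian coordinates while the screen constraint holds by construction.

```python
import math

def toric_camera_2d(ax, ay, bx, by, alpha, theta):
    """2D illustration of the Toric-space idea: all cameras seeing targets
    A and B under angle `alpha` lie on a circular arc through A and B
    (inscribed-angle theorem); `theta` selects one camera on that arc
    (theta = 0 faces the pair head-on)."""
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)
    R = d / (2.0 * math.sin(alpha))          # circumradius for inscribed angle alpha
    tx, ty = dx / d, dy / d                  # unit direction along the chord AB
    nx, ny = -ty, tx                         # unit normal to the chord
    cx = (ax + bx) / 2.0 + nx * R * math.cos(alpha)   # circle center
    cy = (ay + by) / 2.0 + ny * R * math.cos(alpha)
    s, c = math.sin(theta), math.cos(theta)
    return (cx + R * (s * tx + c * nx), cy + R * (s * ty + c * ny))

def viewing_angle(cx, cy, ax, ay, bx, by):
    """Angle under which the camera at (cx, cy) sees segment AB."""
    ux, uy = ax - cx, ay - cy
    vx, vy = bx - cx, by - cy
    dot = ux * vx + uy * vy
    return math.acos(dot / (math.hypot(ux, uy) * math.hypot(vx, vy)))
```

Whatever value of `theta` is chosen, the two targets keep the desired angular extent on screen; the remaining parameters only move the camera along the feasible set, which is exactly the kind of decoupling the Toric Space generalizes to 3D.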
Related publications:
Intuitive and Efficient Camera Control with the Toric Space [acm] [full-article] [videos]. C. Lino, M. Christie. ACM Transactions on Graphics (TOG) - Proceedings of SIGGRAPH, Vol. 34 (4), pp. 82:1-82:12, 2015, ACM New York, NY, USA.
Efficient Composition for Virtual Camera Control [acm] [hal]. C. Lino, M. Christie. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2012, Lausanne, Switzerland.
A Real-time Cinematography System for Interactive 3D Environments [acm] [hal]. C. Lino, M. Christie. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2012, Lausanne, Switzerland.
A Real-time Cinematography System for 3D Environments [hal]. C. Lino, M. Christie, F. Lamarche, P. Olivier. In Actes des 22èmes journées de l'Association Francophone d'Informatique Graphique, 2009, Arles, France.
Inverse-control problems (constrained-optimization)
Properly conveying the content of a (static or dynamic) virtual scene requires providing informative and visually appealing (series of) viewpoints. One problem in computing such viewpoints is to provide effective (while still as efficient as possible) means of setting cameras, scene elements and lights so as to best highlight the content of the scene. This involves satisfying a wide range of aesthetic criteria, from the visual layout of a number of characters or objects to the way they are lit. However, visual composition problems are often over-constrained and highly non-linear. They thus call for good mathematical formulations (what the criteria to satisfy are, and how they combine) and for optimization models that can efficiently, yet also effectively, find a "good" solution.
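A toy instance of this kind of formulation (all names, weights and constants below are illustrative, not taken from any published system): frame a single pinhole-projected subject at a target on-screen height while softly enforcing a minimum camera distance, and search the resulting one-dimensional cost landscape.

```python
# Hypothetical toy viewpoint-computation problem cast as soft-constrained
# optimization: one subject of height H at the origin, a camera at
# distance d looking at it; the cost mixes a composition term (on-screen
# size) with a soft proximity constraint.

F = 1.0   # focal length (normalized image plane)
H = 1.8   # subject height

def screen_height(d):
    return F * H / d                         # pinhole projection

def cost(d, target=0.5, w_size=1.0, w_near=0.1, min_d=1.0):
    size_err = (screen_height(d) - target) ** 2       # composition criterion
    barrier = max(0.0, min_d - d) ** 2                # soft constraint: d >= min_d
    return w_size * size_err + w_near * barrier

def best_viewpoint(lo=0.5, hi=20.0, n=2000):
    """Coarse 1D search; real problems are 7D (or 3D in Toric coordinates)
    and need far better optimizers than a grid sweep."""
    return min((lo + i * (hi - lo) / n for i in range(n + 1)), key=cost)
```

With these constants the optimum sits where the projected height equals the target, i.e. at distance F*H/0.5 = 3.6; adding more subjects or lighting terms quickly makes the landscape multi-modal, which is why the choice of optimization model matters.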
Related publications:
Camera-on-rails: Automated Computation of Constrained Camera Paths [acm] [hal]. Q. Galvane, M. Christie, C. Lino, R. Ronfard. In ACM SIGGRAPH Conference on Motion in Games, 2015, Paris, France
Efficient Composition for Virtual Camera Control [acm] [hal]. C. Lino, M. Christie. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2012, Lausanne, Switzerland.
Advanced Composition in Virtual Camera Control [springer] [hal]. R. Abdullah, M. Christie, G. Schofield, C. Lino, P. Olivier. Lecture Notes in Computer Science, 2011, Volume 6815, Smart Graphics, Pages 13-24.
Toward More Effective Viewpoint Computation Tools [hal]. C. Lino. In Eurographics Workshop on Intelligent Cinematography and Editing, 2015, Zurich, Switzerland.
(Empirical) Knowledge formalization: procedural models (film grammar, storytelling)
When looking at how films are made, one can observe that the film editing process – the timing and assembly of shots into a continuous flow of images – is a crucial step in constructing a coherent cinematographic sequence (regardless of whether the footage comes from real cameras, virtual cameras, or a mix of both). Cinematographers rely on empirical rules; however, two different film-makers may apply these rules very differently, for many reasons. The main difficulty in reproducing the common practices of film-makers is to provide expressive and powerful enough mathematical formulations of what makes a "good" edit. The main challenge is therefore to formalize the quality of an edit, based only on the story and the content of the shot images, and to provide computational models of how to cut and assemble shots the "right" way (or, e.g., as Spielberg or Zemeckis would do).
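A minimal sketch of what such a formalization can look like (the rule set, weights and shot vocabulary are illustrative, not from any published model): score a candidate cut against a couple of classic continuity heuristics, then pick the lowest-cost next shot.

```python
# Hypothetical edit-quality cost: a cut between two shots is penalized
# when it breaks simple continuity heuristics (e.g. the "30-degree rule":
# cutting between views of the same subject less than ~30 degrees apart
# reads as a jump cut). Weights and thresholds are invented.

SIZES = {"CU": 0, "MS": 1, "LS": 2}   # close-up, medium shot, long shot

def cut_cost(shot_a, shot_b):
    """Lower is better. Each shot is a dict: size, angle_deg, subject."""
    cost = 0.0
    same_subject = shot_a["subject"] == shot_b["subject"]
    angle_change = abs(shot_a["angle_deg"] - shot_b["angle_deg"])
    size_change = abs(SIZES[shot_a["size"]] - SIZES[shot_b["size"]])
    if same_subject and angle_change < 30:
        cost += 1.0                        # violates the 30-degree rule
        if size_change == 0:
            cost += 1.0                    # near-identical framing: jump cut
    cost += 0.2 * size_change              # prefer gradual size changes
    return cost

def best_next_shot(current, candidates):
    return min(candidates, key=lambda s: cut_cost(current, s))
```

A full editing model would also score the alignment of each shot with the ongoing story events; once the cost is defined, assembling a sequence becomes a search problem over candidate shots and cut points.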
Related publications:
Computational Model of Film Editing for Interactive Storytelling [springer] [hal]. C. Lino, M. Chollet, M. Christie, R. Ronfard. Lecture Notes in Computer Science, 2011, 7069, International Conference on Interactive Digital Storytelling, Pages 305-308.
Automated Camera Planner for Film Editing Using Key Shots [hal]. C. Lino, M. Chollet, M. Christie, R. Ronfard. In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2011, Vancouver, Canada.
Film Editing for Third Person Games and Machinima [hal]. M. Christie, C. Lino, R. Ronfard. In Workshop on Intelligent Cinematography and Editing, 2012, Raleigh NC, USA.
How Do We Evaluate the Quality of Computational Editing Systems? [hal]. C. Lino, R. Ronfard, Q. Galvane, M. Gleicher. In AAAI Workshop on Intelligent Cinematography and Editing, 2014, Québec City, Québec, Canada.
Continuity Editing for 3D Animations [hal]. Q. Galvane, R. Ronfard, C. Lino, M. Christie. In AAAI Conference on Artificial Intelligence, 2015, Austin, Texas, USA.
(Empirical) Knowledge formalization: data-driven models (analysis-synthesis)
Real cinematographers often rely on a number of key visual compositions, smooth camera motions and empirical editing rules that viewers are familiar with; each has a well-defined narrative goal, such as highlighting a character's action or motion. This calls for methods to easily reproduce such well-accepted camera shots, camera motions and edits. This can be done by relying on annotated databases of previously recorded camera shots, on the characters' actions or motions given as input, and on data-driven methods to analyze and synthesize such stereotypical film sequences. Here, the main problems are to provide methods to (i) construct such film databases, (ii) analyze stereotypical editing styles, and then (iii) synthesize new edits in a given film style. The underlying challenges are to provide well-adapted annotation formats, as well as learning/mining techniques that can deal with such data.
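Step (ii) can be sketched as a simple bigram model over annotated shot types (the clip annotations below are invented for illustration; real annotation formats are far richer): count shot-to-shot transitions in the database, then query the most frequent continuation as a crude proxy for "style".

```python
from collections import Counter, defaultdict

# Hypothetical analysis step over an annotated shot database: learn
# shot-type transition statistics (a bigram/Markov model), then query
# the most likely next shot type.

def learn_transitions(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, shot):
    return counts[shot].most_common(1)[0][0]

# Invented annotated clips: LS = long shot, MS = medium shot, CU = close-up.
clips = [
    ["LS", "MS", "CU", "MS"],      # establish, then tighten on the subject
    ["LS", "MS", "MS", "CU"],
    ["MS", "CU", "CU", "MS"],
]
model = learn_transitions(clips)
```

Sampling from such transition distributions (rather than taking the argmax) is one way to synthesize new edits in the learned style; richer models would condition on story events and shot content as well.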
Related publications:
Camera-on-rails: Automated Computation of Constrained Camera Paths [acm] [hal]. Q. Galvane, M. Christie, C. Lino, R. Ronfard. In ACM SIGGRAPH Conference on Motion in Games, 2015, Paris, France
CollaStar: Interaction collaborative avec des données multidimensionnelles et temporelles. [hal]. C. Perrin, M. Christie, F. Vernier, C. Lino. In 25ème Conférence Francophone sur l’Interaction Homme-Machine (IHM), 2013, Bordeaux, France.
Analyzing Elements of Style in Annotated Film Clips. H.-Y. Wu, Q. Galvane, C. Lino, M. Christie. To appear in Eurographics Workshop on Intelligent Cinematography and Editing, 2017, Lyon, France.
Insight: An Annotation Tool and Format Targeted Towards Film Analysis. B. Merabti, H.-Y. Wu, C. Sanokho, Q. Galvane, C. Lino, M. Christie. In Eurographics Workshop on Intelligent Cinematography and Editing, 2015, Zurich, Switzerland
How Do We Evaluate the Quality of Computational Editing Systems? [hal]. C. Lino, R. Ronfard, Q. Galvane, M. Gleicher. In AAAI Workshop on Intelligent Cinematography and Editing, 2014, Québec City, Québec, Canada.