Publications

View on Google Scholar

For the majority of my publications, a link to the open-access archive HAL is provided. If you are unable to access a paper, please drop me an email.

Videos accompanying the articles are also available on my YouTube channel.

Courses and Tutorials

Algorithms and Techniques for Virtual Camera Control. R. Ranon, C. Lino, Q. Galvane, M. Christie. [eg] [materials]

Eurographics Tutorials, 2016.

Camera control is required in nearly all interactive 3D applications and presents a particular combination of technical challenges. This tutorial presents recent and novel research ideas for handling a user's viewpoint on a scene in interactive, semi-automatic, and fully declarative camera control situations, covering a range of techniques from path planning and visibility computation to optimal viewpoint computation and continuity editing. Some of the tools, algorithms and datasets presented are also made available to the community.

International Journals (peer-reviewed, sorted by CORE ranking)

Intuitive and Efficient Camera Control with the Toric Space. C. Lino, M. Christie. [acm] [full-article] [videos]

ACM Transactions on Graphics (TOG) - Proceedings of SIGGRAPH, 2015.

A large range of computer graphics applications require users to position and move viewpoints in 3D scenes. In this paper, we introduce the Toric space, a novel and compact representation for intuitive and efficient virtual camera control. We first show how visual properties are expressed in this Toric space and propose an efficient search technique for automated viewpoint computation. We then derive a novel screen-space manipulation technique that provides intuitive and real-time control of visual properties. Finally, we propose an effective viewpoint interpolation technique which ensures the continuity of visual properties along the generated paths. The approach should quickly find its place in applications such as 3D modelers, navigation tools or 3D games.
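
To give the flavor of the representation: for two targets, the set of camera positions that see both targets under a given angle forms a surface of revolution around the targets' axis, which two further angles then index. Below is a minimal Python sketch of this construction; it is a simplification of the paper's actual Toric space (which ties the angle to desired on-screen positions), and the function name and conventions are hypothetical.

```python
import numpy as np

def toric_camera_position(A, B, alpha, theta, phi):
    """Hypothetical sketch: place a camera so that targets A and B are seen
    under a fixed angle `alpha` (inscribed-angle theorem). `theta` in
    (0, pi - alpha) is the angle at A between AB and the view line;
    `phi` spins the construction plane around the AB axis."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(B - A)
    u = (B - A) / d                               # unit axis from A to B
    # Law of sines: distance from A so that AB subtends `alpha` at the camera.
    r = d * np.sin(alpha + theta) / np.sin(alpha)
    # Orthonormal frame around the AB axis (fallback if AB is near-vertical).
    n = np.cross(u, [0.0, 1.0, 0.0])
    if np.linalg.norm(n) < 1e-8:
        n = np.cross(u, [1.0, 0.0, 0.0])
    n /= np.linalg.norm(n)
    w = np.cross(u, n)
    side = np.cos(phi) * n + np.sin(phi) * w      # in-plane direction, spun by `phi`
    return A + r * (np.cos(theta) * u + np.sin(theta) * side)
```

Sweeping `theta` and `phi` while keeping `alpha` fixed moves the camera without changing the angular separation of the two targets, which is why interpolating in such a space preserves composition along the path.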

Directing Cinematographic Drones. Q. Galvane, C. Lino, M. Christie, J. Fleureau, F. Servant, F.-L. Tariolle, P. Guillotel. [acm] [hal]

ACM Transactions on Graphics (TOG) - Presented at SIGGRAPH, 2018.

Quadrotor drones equipped with high-quality cameras have rapidly emerged as novel, cheap and stable devices for filmmakers. Professional drone pilots can create aesthetically pleasing videos in a short time. However, the smooth – and cinematographic – control of a camera drone remains challenging for most users, despite recent tools which either automate part of the process or enable the manual design of waypoints to create drone trajectories. This paper moves a step further by offering high-level control of cinematographic drones for the specific task of framing dynamic targets.

Real-time Anticipation of Occlusions for Automated Camera Control in Toric Space. L. Burg, C. Lino, M. Christie. [CGF] [hal]

Computer Graphics Forum - Proceedings of Eurographics, 2020.

Efficient visibility computation is a prominent requirement when designing automated camera control techniques for dynamic 3D environments. In this paper, we introduce a novel GPU-rendering technique to efficiently compute occlusions of tracked targets in Toric Space coordinates – a parametric space designed for cinematic camera control. We then rely on this occlusion evaluation to derive an anticipation map predicting occlusions for a continuous set of cameras over a user-defined time window. We finally design a camera motion strategy exploiting this anticipation map to minimize the occlusions of tracked entities over time.
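
The anticipation map boils down to scoring a set of candidate cameras by how often the target is predicted to be hidden over the coming time window. A minimal CPU sketch follows (the paper evaluates this on the GPU over Toric-space coordinates; `scene_blocked` and `predict_target` are assumed callbacks, not part of the paper's API):

```python
import numpy as np

def occlusion_anticipation_map(candidates, predict_target, scene_blocked,
                               horizon=1.0, dt=0.1):
    """Hypothetical sketch: for each candidate camera position, estimate the
    fraction of a future time window during which the target is occluded.
    `predict_target(t)` extrapolates the target position t seconds ahead;
    `scene_blocked(p, q)` ray-casts the segment p -> q against the scene."""
    times = np.arange(0.0, horizon, dt)
    occlusion = np.zeros(len(candidates))
    for i, cam in enumerate(candidates):
        hits = sum(scene_blocked(cam, predict_target(t)) for t in times)
        occlusion[i] = hits / len(times)   # 0 = always visible, 1 = always hidden
    return occlusion
```

A motion strategy can then steer the camera toward low-occlusion regions of the map, rather than reacting only after the target disappears.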

Directing the Photography: Combining Cinematic Rules, Indirect Light Controls and Lighting-by-Example. Q. Galvane, C. Lino, M. Christie, R. Cozot. [CGF] [hal]

Computer Graphics Forum - Proceedings of Pacific Graphics, 2018.

The placement of lights in a 3D scene is a technical and artistic task that requires time and trained skills. Most 3D modelling tools only provide direct control of light sources. Prior approaches have relied on automated or semi-automated techniques to relieve users of such low-level manipulations, at the expense of a significant computational cost. In this paper, guided by discussions with experts in scene and object lighting, we propose an indirect control of area light sources. Results demonstrate the benefits of the approach on the quick lighting of 3D characters, and further demonstrate the feasibility of interactive control of multiple lights through image features.
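
Indirect control of this kind can be framed as inverse rendering over the light parameters. A minimal sketch under assumptions (`render` and `extract_features` are hypothetical callbacks standing in for the renderer and the image-feature extractor; the paper's actual solver differs):

```python
import numpy as np
from scipy.optimize import minimize

def solve_light(render, extract_features, target_features, x0):
    """Hypothetical sketch: find area-light parameters (position, size,
    intensity packed into the vector x) whose rendering best matches
    user-specified image features, via derivative-free optimization."""
    def cost(x):
        feats = extract_features(render(x))
        return float(np.sum((feats - np.asarray(target_features)) ** 2))
    return minimize(cost, x0, method="Nelder-Mead").x
```

The expensive part of such a loop is `render`, which explains the computational cost the abstract mentions for automated approaches.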

International Conferences (peer-reviewed, sorted by CORE ranking)

The Director's Lens: An Intelligent Assistant for Virtual Cinematography. C. Lino, M. Christie, R. Ranon, W. Bares. [acm] [hal] [videos]

ACM International Conference on Multimedia, 2011.

We present an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine computing, at the request of the filmmaker, suitable camera placements for starting a shot. Suggestions represent semantically and cinematically distinct choices for visualizing the current narrative; through a machine learning component, they account for established cinema conventions along with the filmmaker's previous selections and manually crafted camera compositions. The result is a novel workflow based on the interactive collaboration of human creativity with automated intelligence. It enables efficient exploration of cinematographic possibilities, and rapid production of computer-generated animated movies.

Continuity Editing for 3D Animations. Q. Galvane, R. Ronfard, C. Lino, M. Christie. [aaai] [hal] [video]

AAAI Conference on Artificial Intelligence, 2015.

We describe an optimization-based approach for automatically creating well-edited movies from a 3D animation. While previous work has mostly focused on the problem of placing cameras to produce nice-looking views of the action, the problem of cutting and pasting shots from all available cameras has never been addressed extensively. In this paper, we review the main causes of editing errors in the literature and propose an editing model relying on a minimization of such errors. We make a plausible semi-Markov assumption, resulting in a dynamic programming solution which is computationally efficient. Combined with state-of-the-art cinematography, our approach therefore promises to significantly extend the expressiveness and naturalness of virtual movie-making.
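
The semi-Markov assumption is what makes the optimization tractable: a shot's cost depends only on its camera, its extent, and the camera it cuts from, so dynamic programming over (frame, camera) states finds the globally optimal edit. A schematic sketch with hypothetical cost callbacks (not the paper's exact error terms):

```python
import numpy as np

def best_edit(shot_cost, cut_cost, n_frames, n_cams, max_len):
    """Hypothetical sketch of semi-Markov dynamic programming for editing:
    pick a sequence of (camera, shot extent) segments minimizing per-shot
    costs plus transition costs between consecutive cameras.
    `shot_cost(c, s, e)` scores camera c over frames [s, e);
    `cut_cost(c1, c2)` penalizes bad transitions (e.g. jump cuts)."""
    INF = float("inf")
    best = np.full((n_frames + 1, n_cams), INF)  # best[t, c]: cost up to t, ending on c
    back = {}
    best[0, :] = 0.0
    for t in range(1, n_frames + 1):
        for c in range(n_cams):
            for length in range(1, min(max_len, t) + 1):
                s = t - length
                for prev in range(n_cams):
                    trans = 0.0 if s == 0 else cut_cost(prev, c)
                    cand = best[s, prev] + trans + shot_cost(c, s, t)
                    if cand < best[t, c]:
                        best[t, c] = cand
                        back[(t, c)] = (s, prev)
    # Backtrack the optimal sequence of shots.
    t, c = n_frames, int(np.argmin(best[n_frames]))
    shots = []
    while t > 0:
        s, prev = back[(t, c)]
        shots.append((c, s, t))
        t, c = s, prev
    return list(reversed(shots))
```

The search is O(frames × cameras² × maximum shot length), which is what keeps it computationally efficient even for long sequences.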

Real-Time Cinematic Tracking of Targets in Dynamic Environments. L. Burg, C. Lino, M. Christie.

Graphics Interface, 2021.

Tracking a moving target in a cinematic way inside a dynamic 3D environment remains a challenging problem: it requires simultaneously ensuring a low computational cost, a good degree of reactivity and a high cinematic quality despite sudden changes. In this paper, we draw on the idea of Motion-Predictive Control to propose an efficient real-time camera tracking technique which ensures these properties. Our approach relies on the predicted motion of a target to create and evaluate a very large number of camera motions using hardware ray casting. Our evaluation of camera motions includes a range of cinematic properties such as distance to target, visibility, collision, smoothness and jitter. Experiments are conducted to demonstrate the benefits of the approach relative to prior work.
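
The predictive scheme can be summarized as generate-and-test over the prediction horizon. A compact sketch with assumed callbacks (`sample_motion`, the cost functions and the weighting are all hypothetical; the paper evaluates visibility with hardware ray casting):

```python
def best_camera_motion(sample_motion, target_traj, criteria, n_samples=512):
    """Hypothetical sketch: draw many candidate camera motions over the
    prediction horizon and keep the one with the lowest aggregated cost.
    `sample_motion()` returns a feasible camera trajectory; `criteria` is
    a list of (weight, cost_fn) pairs, e.g. distance to target, visibility,
    collision, smoothness and jitter."""
    def cost(motion):
        return sum(w * fn(motion, target_traj) for w, fn in criteria)
    candidates = [sample_motion() for _ in range(n_samples)]
    return min(candidates, key=cost)
```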

Efficient Composition for Virtual Camera Control. C. Lino, M. Christie. [acm] [hal] [videos]

ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2012.

Automatically positioning a virtual camera in a 3D environment given the specification of visual properties to be satisfied is a complex and challenging problem. Most approaches tackle the problem by expressing visual properties as constraints or functions to optimize, and rely on computationally expensive search techniques to explore the solution space. We show here how to express and solve the exact on-screen positioning of two or three subjects using a simple yet robust and very efficient technique which will find a wide range of applications in virtual camera control and more generally in computer graphics.
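
One building block of such exact solutions is easy to state: the desired on-screen positions of two subjects, together with the camera's field of view, fix the angle between the two view rays, and by the inscribed-angle theorem the viable camera positions form a closed surface around the pair. A small sketch under assumed conventions (normalized device coordinates in [-1, 1]; hypothetical function name):

```python
import numpy as np

def subtended_angle(p_a, p_b, fov_x, fov_y):
    """Hypothetical sketch: the angle between the view rays through two
    desired on-screen positions (normalized device coordinates), given the
    camera's horizontal/vertical field of view. Cameras realizing this
    composition must see the two subjects under exactly this angle."""
    def ray(p):
        d = np.array([p[0] * np.tan(fov_x / 2), p[1] * np.tan(fov_y / 2), 1.0])
        return d / np.linalg.norm(d)
    ra, rb = ray(p_a), ray(p_b)
    return np.arccos(np.clip(ra @ rb, -1.0, 1.0))
```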

A Real-time Cinematography System for Interactive 3D Environments. C. Lino, M. Christie, F. Lamarche, G. Schofield, P. Olivier. [acm] [hal] [videos]

ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2010.

Developers of interactive 3D applications, such as computer games, are expending increasing levels of effort on the challenge of creating more narrative experiences in virtual worlds. As a result, there is a pressing requirement to automate an essential component of a narrative – the cinematography – and develop camera control techniques that can be utilized within the context of interactive environments. In this paper, we present a fully automated real-time cinematography system that constructs a movie from a sequence of low-level narrative elements (events, key subject actions and key subject motions). It offers an expressive framework which delivers notable variations in directorial style. Our process relies on a viewpoint space partitioning (Director Volumes) that identifies characteristic viewpoints of relevant actions, for which we compute partial and full visibility. Our system represents a novel and expressive approach to cinematic camera control, in contrast with existing techniques which are mostly procedural, concentrate only on isolated aspects (visibility, transitions, editing, framing) or do not account for variations in directorial style.

Computational Model of Film Editing for Interactive Storytelling. C. Lino, M. Chollet, M. Christie, R. Ronfard. [springer] [hal]

Lecture Notes in Computer Science, 2011, International Conference on Interactive Digital Storytelling.

Generating interactive narratives as movies requires knowledge in cinematography (camera placement, framing, lighting) and film-editing (cutting between cameras). We present a framework for generating a well-edited movie from interactively generated scene contents and cameras. Our system computes a sequence of shots by simultaneously choosing which camera to use, when to cut in and out of the shot, and which camera to cut to.

Camera-on-rails: Automated Computation of Constrained Camera Paths. Q. Galvane, M. Christie, C. Lino, R. Ronfard. [acm] [hal]

ACM SIGGRAPH Conference on Motion in Games, 2015.

Though there is a range of techniques to automatically compute camera paths in virtual environments, none has seriously considered the problem of generating realistic camera motions, even for simple scenes. Among possible cinematographic devices, real cinematographers often rely on camera rails to create the smooth camera motions viewers are familiar with. Following this practice, in this paper we propose a method for generating virtual camera rails and computing smooth camera motions on these rails. Our technique analyzes character motions and user-defined framing properties to compute rough camera motions, which are further refined using constrained-optimization techniques. Comparisons with recent techniques demonstrate the benefits of our approach and open interesting perspectives in terms of creative support tools for animators and cinematographers.
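
The refinement step can be caricatured as a trade-off between smoothness and fidelity to the rough path. A much-simplified stand-in for the paper's constrained optimization (hypothetical parameters; a real rail solver would also enforce framing constraints):

```python
import numpy as np

def smooth_rail(points, alpha=0.5, attachment=0.05, iterations=500):
    """Hypothetical sketch: smooth a rough camera path by repeatedly pulling
    each sample toward the midpoint of its neighbours (penalizing curvature)
    while an attachment term keeps it close to the original rough motion."""
    points = np.asarray(points, float)
    p = points.copy()
    for _ in range(iterations):
        lap = np.zeros_like(p)
        lap[1:-1] = 0.5 * (p[:-2] + p[2:]) - p[1:-1]  # discrete Laplacian
        p += alpha * lap + attachment * (points - p)  # smooth, but stay near input
    return p
```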

Advanced Composition in Virtual Camera Control. R. Abdullah, M. Christie, G. Schofield, C. Lino, P. Olivier. [springer] [hal]

Lecture Notes in Computer Science, 2011, Smart Graphics.

The rapid increase in the quality of 3D content, coupled with the evolution of hardware rendering techniques, calls for camera control systems that can apply aesthetic rules and conventions from visual media such as film and television. One of the most important problems in cinematography is that of composition, the precise placement of elements in the shot. Researchers have already considered this problem, but mainly focused on basic compositional properties such as size and framing. In this paper, we present a camera system that automatically configures the camera in order to satisfy advanced compositional rules. We have selected a number of those rules and specified rating functions for them; using optimisation, we then find the best possible camera configuration. Finally, for better results, we use image processing methods to rate the satisfaction of rules in the shot.
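
Rating functions of this kind are typically simple scores in screen space. As one illustrative example (not taken from the paper), a rule-of-thirds rating might score how close a subject sits to the nearest power point:

```python
import numpy as np

def rule_of_thirds_rating(subject_xy):
    """Hypothetical sketch of one compositional rating function: 1.0 when the
    subject's screen position (normalized to [0, 1]^2) lies on a
    rule-of-thirds power point, falling off linearly to 0.0 with distance."""
    powers = np.array([[x, y] for x in (1/3, 2/3) for y in (1/3, 2/3)])
    d = np.min(np.linalg.norm(powers - np.asarray(subject_xy), axis=1))
    return max(0.0, 1.0 - d / np.linalg.norm([1/3, 1/3]))
```

An optimizer then searches camera configurations for the one maximizing a combination of such ratings.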

An Interactive Interface for Lighting-by-Example. H.N. Ha, C. Lino, M. Christie, P. Olivier. [springer] [hal]

Lecture Notes in Computer Science, 2010, Smart Graphics.

Lighting design in computer graphics is essentially not a random process but one driven by both a technical and an aesthetic appreciation of lighting. Still, some users may have difficulty properly modifying the lighting parameters to achieve desired lighting effects. We present and demonstrate an approach to lighting design for applications where the expected result of the lighting design process is a 2D image. In this approach, a lighting-by-example method using a perception-based objective function is combined with an interactive interface to optimize lighting parameters for an object or a group of objects individually, and the visual results of these separate processes are combined in the seamless generation of a final 2D image.

CollaStar: Interaction collaborative avec des données multidimensionnelles et temporelles. C. Perrin, M. Christie, F. Vernier, C. Lino. [hal] [videos]

Conférence Francophone sur l’Interaction Homme-Machine (IHM), 2013.

While the literature offers many representations for visualizing multidimensional data, few works have dealt with controlling the values of such data over time. We present CollaStar, an interface allowing multiple users to collaboratively manipulate a set of time-varying parameters through dedicated interaction and visualization techniques. Our interface combines a central star-based representation, dedicated to collaborative manipulation tasks, with one data visualization window per user (a Linear Wall showing the temporal evolution of the parameters). We use CollaStar to control a movie-making engine (manipulating camera settings) and quantitatively evaluate our system with filmmaking experts.

Patents and Standards

Generating Enriched Light Sources Utilizing Surface-Centric Representations. C. Lino, T. Boubekeur, A. Salvi, S. Deguy. [link]

US Patent Application n° 20220165023.


Modifying Light Sources within Three-dimensional Environments by Utilizing Control Models Based on Three-dimensional Interaction Primitives. T. Boubekeur, C. Lino, S. Deguy, A. Salvi. [link]

US Patent Application n° 20220148257.


Rendering Portions of a Three-dimensional Environment with Different Sampling Rates Utilizing a User-defined Focus Frame. C. Lino, T. Boubekeur. [link]

US Patent Application n° 20220172427.


Methods, System and Software Program for Shooting and Editing a Film Comprising at least One Image of a 3D Computer-generated Animation. W. Bares, C. Lino, M. Christie, R. Ranon. [hal]

US Patent n° 20130135315, European Patent n° 2600316 A1.


Posters and Demos (in international conferences)

A Smart Assistant for Shooting Virtual Cinematography with Motion-Tracked Cameras. C. Lino, M. Christie, R. Ranon, W. Bares. [acm] [hal] [videos]

ACM International Conference on Multimedia, 2011.

This demonstration shows how an automated assistant encoded with knowledge of cinematography practice can offer suggested viewpoints to a filmmaker operating a hand-held motion-tracked virtual camera device. Our system, called Director's Lens, uses an intelligent cinematography engine to compute, at the filmmaker's request, a set of suitable camera placements for starting a shot, representing semantically and cinematically distinct choices for visualizing a narrative. Editing decisions and hand-held camera compositions made by the user in turn influence the system's suggestions for subsequent shots. The result is a novel workflow that enhances the filmmaker's creative potential by enabling efficient exploration of a wide range of computer-suggested cinematographic possibilities.

Automated Camera Planner for Film Editing Using Key Shots. C. Lino, M. Chollet, M. Christie, R. Ronfard. [hal]

ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2011.

Generating films from 3D animations requires knowledge in cinematography (camera placement, framing and lighting) and editing (cutting between cameras). In applications where the user is already engaged in other tasks, such as playing a game, directing virtual actors, or narrating a story, it appears desirable to build systems that can make decisions about cinematography and editing. In this paper, we introduce a framework for generating a well-edited movie based on the rules of film editing. Our system computes a sequence of shots by simultaneously choosing which camera to use, when to cut in and out of the shot, and where to cut to. We cast film editing as a cost minimization problem in the space of possible shot sequences and provide an efficient search algorithm.

International Workshops

High-Level Features for Movie Style Understanding. R. Courant, C. Lino, M. Christie, V. Kalogeiton.

ICCV Workshop on AI for Creative Video Editing and Understanding, 2021. (Best Paper)

Automatically analysing stylistic features in movies is a challenging task, as it requires an in-depth knowledge of cinematography. In the literature, only a handful of methods explore stylistic feature extraction, and they typically focus on limited low-level image and shot features (colour histograms, average shot lengths or shot types, amount of camera motion). These, however, only capture a subset of the stylistic features which help to characterise a movie (e.g. black-and-white vs. coloured, or film editing). To this end, in this work, we systematically explore seven high-level features for movie style analysis: character segmentation, pose estimation, depth maps, focus maps, frame layering, camera motion type and camera pose. Our findings show that low-level features remain insufficient for movie style analysis, while high-level features seem promising.

Analyzing Elements of Style in Annotated Film Clips. H.-Y. Wu, Q. Galvane, C. Lino, M. Christie. [hal]

Eurographics Workshop on Intelligent Cinematography and Editing, 2017.

This paper presents an open database of annotated film clips together with an analysis of elements of filmic style related to how the shots are composed, how the transitions are performed between shots and how the shots are sequenced to compose a film unit. The purpose is to initiate a shared repository pertaining to elements of film style which can be used by computer scientists and film analysts alike. Current databases are either limited to low-level features (such as shots lengths, color and luminance information), contain noisy data, or are not available to the communities. The data and analysis we provide open exciting perspectives as to how computational approaches can rely more thoroughly on information and knowledge extracted from existing movies, and also provide a better understanding of how elements of style are arranged to construct a consistent message.

Toward More Effective Viewpoint Computation Tools. C. Lino. [hal]

Eurographics Workshop on Intelligent Cinematography and Editing (WICED), 2015.

Proposed viewpoint computation techniques can be evaluated in terms of their efficiency (computation time), but there is a lack of proper evaluation of their effectiveness (how aesthetically satisfactory the computed viewpoints are). In fact, they often rely on the maximization of a single fitness function, built as a weighted sum (i.e. a pure trade-off) over a set of criteria, whose satisfactions are thus considered fully independent. In contrast, cinematographers' sense of a viewpoint's quality is far from a trade-off. In this paper, we first introduce a range of aggregation functions supplementing the weighted sum and making it possible to express a broader range of relationships between criteria. We then propose to aggregate individual satisfactions in a hierarchical way, removing the need to tune weights. We finally propose to reduce the search to camera positions (i.e. from 7D to 3D), as the framing can be better constrained by separately optimizing the camera's orientation and focal length.
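
The contrast between aggregation operators is easy to make concrete. Three illustrative operators over per-criterion satisfactions in [0, 1] (schematic examples, not the paper's exact functions):

```python
def weighted_sum(sats, weights):
    """Pure trade-off: a high satisfaction on one criterion can fully
    compensate for a failure on another."""
    return sum(w * s for w, s in zip(weights, sats)) / sum(weights)

def non_compensatory(sats):
    """Weakest link: the viewpoint is only as good as its worst criterion."""
    return min(sats)

def hierarchical(sats):
    """Each criterion only refines the ranking left open by the more
    important ones (like successive digits of a number), so no weights
    need tuning; `sats` is ordered from most to least important."""
    score, scale = 0.0, 1.0
    for s in sats:
        scale /= 10.0   # a full-range gap on an earlier criterion dominates all later ones
        score += scale * s
    return score
```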

Insight: An Annotation Tool and Format Targeted Towards Film Analysis. B. Merabti, H.-Y. Wu, C. Sanokho, Q. Galvane, C. Lino, M. Christie. [hal]

Eurographics Workshop on Intelligent Cinematography and Editing (WICED), 2015.

In this paper, we propose an annotation language broadly suited to analytical and generative cinematography systems. The language covers the axes of timing, spatial composition and hierarchical film structure, and links to contextual elements in the film.

How Do We Evaluate the Quality of Computational Editing Systems? C. Lino, R. Ronfard, Q. Galvane, M. Gleicher. [hal]

AAAI Workshop on Intelligent Cinematography and Editing (WICED), 2014.

There is a pressing requirement for appropriate evaluations of proposed automated editing models and techniques. Indeed, though papers are often accompanied by example videos, showing subjective results and occasionally providing qualitative comparisons with other methods or with human-created movies, they generally lack an extensive evaluation. The goal of this paper is to survey evaluation methodologies that have been used in the past and to review a range of other interesting methodologies, as well as a number of questions related to how we could better evaluate and compare future systems.

Film Editing for Third Person Games and Machinima. M. Christie, C. Lino, R. Ronfard. [hal]

FDG Workshop on Intelligent Cinematography and Editing (WICED), 2012.

Generating content for third person games and machinima requires knowledge in cinematography (camera placement, framing, lighting) and editing (cutting between cameras). In existing systems, such knowledge either comes from the final user (machinima) or from a database of precompiled solutions (third person games). In this paper, we present a system that can make decisions about editing and automatically generate a grammatically correct movie during game interaction.

National Communications

A Real-time Cinematography System for 3D Environments. C. Lino, M. Christie, F. Lamarche, P. Olivier. [hal]

Journées de l'Association Francophone d'Informatique Graphique (AFIG), 2009.

We propose a real-time method to automate the construction of a movie from a list of low-level elements (e.g. character actions/motions, object motions). Our system computes appropriate viewpoints on these elements and performs editing cuts following cinematographic conventions and a directorial style defined by the user.

Theses

Virtual Camera Control using Dynamic Spatial Partitions. C. Lino. [hal]

PhD thesis in Computer Science, University of Rennes 1, France, 2013.

In this thesis, we first propose a unified framework incorporating the four key aspects of virtual cinematography (computing viewpoints, planning camera paths, editing, and computing visibility). This expressive framework allows a number of cinematographic style dimensions to be explored. We then propose a methodology for combining the capabilities of an automated system with user interaction and creativity. Last, we present a novel, efficient camera control model which reduces the search space from 6D to 3D. This model has the potential to replace a number of existing formulations.