Differentiable Graphics with TensorFlow 2.0

Deep learning has introduced a profound paradigm shift in recent years, making it possible to solve significantly more complex perception problems than previously possible. This shift has positively impacted a tremendous number of fields, with a giant leap forward in computer vision and computer graphics algorithms. The development of public libraries such as TensorFlow is in large part responsible for the massive growth of AI. These libraries have made deep learning easily accessible to researchers and engineers, enabling fast progress in deep learning techniques in both industry and academia.

Tutorial structure

The tutorial is divided into two parts. The first part is a hands-on tutorial on TensorFlow 2.0, with content ranging from convolutions to notebooks implementing recent state-of-the-art models; it comes with plenty of material for the audience to experiment with during and after the tutorial. The second part introduces TensorFlow Graphics, a library bringing a wide array of computer graphics components (e.g., renderers) to machine learning, followed by a presentation of state-of-the-art research built on top of TensorFlow Graphics and a discussion of exciting directions for future research.

Part 1: Introduction to deep learning with TensorFlow 2.0 – Josh Gordon, Paige Bailey

This section will be a practical, hands-on tutorial on TensorFlow 2.0 with a focus on best practices. The goal is to help you get started with TensorFlow and deep learning efficiently and effectively, so you can continue learning on your own. During this part of the course, we will guide attendees through writing several flavors of neural networks, beginning with the basics (fully connected, then convolutional, then recurrent) and continuing on to GANs. As we go, we will interleave slides that introduce key concepts, e.g., softmax, with code written in TensorFlow 2.0.
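To give a flavor of the code attendees will write, the sketch below builds a minimal fully connected classifier with a softmax output using the tf.keras API in TensorFlow 2.0. The dataset (MNIST) and hyperparameters are illustrative assumptions rather than material drawn from the tutorial notebooks.

    import tensorflow as tf

    # Load a small standard dataset (MNIST digits) and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A minimal fully connected network: flatten each image, apply one hidden
    # layer, and produce a softmax distribution over the 10 digit classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Compile with a standard optimizer and loss, then train for a few epochs.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

The same tf.keras workflow extends naturally to the convolutional, recurrent, and GAN examples covered in the notebooks.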

Part 2: Graphics-inspired differentiable layers – Julien Valentin, TBD, TBD

Although our world is inherently 3D, TensorFlow’s design was originally crafted around deep learning for 2D problems, i.e., images, which makes research, development, and productization less effective than they could be when tackling three-dimensional problems. Over the last few years, we have seen a rise in research on novel differentiable graphics and geometry layers that can be inserted into a standard neural network architecture to process three-dimensional data. From spatial transformers to differentiable graphics renderers, these new layers make it possible to leverage the knowledge and expertise acquired over years of computer vision and graphics research for the rapid design of novel network architectures, as well as the development of innovative products and research. The range of applications is broad and includes depth estimation, scene flow estimation, facial reenactment, 3D scene understanding, generative modelling, and tracking. We will introduce TensorFlow Graphics, an open-source library containing a set of graphics-inspired layers, and explain how these layers can be used to implement many recent structured neural network architectures that solve various perception tasks. While presenting these novel layers, we will also show, step by step, how a differentiable graphics and geometry pipeline can be implemented in TensorFlow and how this library can help in this endeavor.
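To build intuition for what a differentiable geometric layer means in practice, the sketch below uses only core TensorFlow 2.0 ops to rotate 3D points and back-propagate through the rotation. It is a conceptual illustration on assumed toy data, not the TensorFlow Graphics API itself.

    import tensorflow as tf

    # Toy example (assumed data): rotate 3D points about the z-axis by a learnable
    # angle and back-propagate a simple loss through the geometric transform.
    points = tf.constant([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])    # (N, 3) input points
    targets = tf.constant([[0.0, 1.0, 0.0],
                           [-1.0, 0.0, 0.0]])  # (N, 3) desired positions
    angle = tf.Variable(0.1)                   # rotation parameter to optimize

    with tf.GradientTape() as tape:
        c, s = tf.cos(angle), tf.sin(angle)
        rotation = tf.stack([tf.stack([c, -s, 0.0]),
                             tf.stack([s, c, 0.0]),
                             tf.constant([0.0, 0.0, 1.0])])  # 3x3 rotation about z
        rotated = tf.matmul(points, rotation, transpose_b=True)
        loss = tf.reduce_mean(tf.square(rotated - targets))

    # Gradients flow through the rotation, so the angle (or any upstream network
    # predicting it) can be trained with a standard optimizer.
    gradient = tape.gradient(loss, angle)
    tf.keras.optimizers.SGD(learning_rate=1.0).apply_gradients([(gradient, angle)])

TensorFlow Graphics packages this kind of geometric computation as ready-made, batched layers, so network architectures can use them without re-implementing the underlying math by hand.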