NVIDIA DLI WORKSHOPS

2020

SARS-CoV-2 presents us with new challenges. For this reason, we have decided to move our workshops online!
If you are a student or researcher interested in learning the Fundamentals of Deep Learning and Accelerated Computing, you can sign up for one of the upcoming events no matter where you live. We look forward to seeing you!

The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning and accelerated computing.

NVIDIA DLI and the University of Salerno (Dept. of Innovation Systems) are excited to announce, for the third year in a row, the 2020 series of practical Deep Learning and Accelerated Computing workshops, exclusively for verifiable academic students, staff, and researchers.

Program

Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities.

In this workshop, you’ll learn the basics of deep learning by training and deploying neural networks. You’ll learn how to:

  • Implement common deep learning workflows, such as image classification and object detection

  • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability

  • Deploy your neural networks to start solving real-world problems

Upon completion, you’ll be able to start solving problems on your own with deep learning.

Where:

  • Nov 30, 08h30 CET - ONLINE

  • Feb 27, 10h30 - Mediacom

More info

Learn the latest deep learning techniques to understand textual input using natural language processing (NLP). You’ll learn how to:

  • Understand how text embeddings for NLP tasks have rapidly evolved, from Word2Vec to recurrent neural network (RNN)-based embeddings to Transformers

  • See how Transformer architecture features, especially self-attention, are used to create language models without RNNs

  • Use self-supervision to improve the Transformer architecture in BERT, Megatron, and other variants for superior NLP results

  • Leverage pre-trained, modern NLP models to solve multiple tasks such as text classification, NER, and question answering

  • Manage inference challenges and deploy refined models for live applications

Where:

  • Dec 12, 08h30 CET - ONLINE

  • Jul 22, 10h00 - University of Salerno, Dept. of Innovation Systems

More info

This workshop explores how convolutional and recurrent neural networks can be combined to generate effective descriptions of content within images and video clips.

Learn how to train a network using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset to generate captions from images and video by:

  • Implementing deep learning workflows like image segmentation and text generation

  • Comparing and contrasting data types, workflows, and frameworks

  • Combining computer vision and natural language processing

Upon completion, you’ll be able to solve deep learning problems that require multiple types of data inputs.

Where:

  • Dec 14, 08h30 CET - ONLINE

  • Jan 10, 10h00 - University of Sannio, Dept. of Engineering

More info


The CUDA computing platform lets you accelerate CPU-only applications to run on the world’s fastest massively parallel GPUs. Experience C/C++ application acceleration by:

  • Accelerating CPU-only applications so that their latent parallelism runs on GPUs

  • Utilizing essential CUDA memory management techniques to optimize accelerated applications

  • Exposing accelerated application potential for concurrency and exploiting it with CUDA streams

  • Leveraging command line and visual profiling to guide and check your work

Upon completion, you’ll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.

Where:


More info


This workshop explores how to use Numba—the just-in-time, type-specializing Python function compiler—to accelerate Python programs to run on massively parallel NVIDIA GPUs. You’ll learn how to:

  • Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)

  • Use Numba to create and launch custom CUDA kernels

  • Apply key GPU memory management techniques

Upon completion, you’ll be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.

Where:


More info