Deep Learning meets (Astro)physics

A one-day tutorial on neural nets, deep learning, and TensorFlow 2.0, with applications to (astro)physics and science


January 22, 2020 @ Institute for Particle Physics and Astrophysics (ETH Zurich)

Machine Learning (ML), and in particular Deep Learning, is becoming an increasingly popular tool in science, and especially in (astro)physics. Recent examples include denoising galaxy images using generative adversarial networks, large-scale probabilistic programming for particle physics based on 3D-CNN-LSTMs, and finding exoplanets in Kepler data with convolutional neural networks.

We are therefore excited to announce a one-day hands-on workshop on deep learning for (astro)physics, aimed mainly at scientists at ETH Zurich. Our goal for this tutorial-style workshop is to give everybody a comfortable, beginner-friendly way to get started, to see what the whole machine learning hype is about, and to understand if and how ML could also help with their own research.

We explicitly invite people who currently have little to no background in machine learning, and we promise to do our best to make the workshop beginner-friendly and accessible to everybody! :)

The workshop will feature a morning and an afternoon session. Before lunch, we will give a theoretical introduction to the most important concepts of deep learning, answering questions such as "What is a neural network?", "How do you train it?", and "Which architectures are suitable for which task?". There will also be four short impulse talks, in which our speakers will give concrete examples of how to apply deep learning to specific science use cases. In the afternoon, things will get more hands-on: we will walk you through an interactive session and show you how to train your own neural networks using the latest version of Google's TensorFlow framework. It is, of course, also possible to attend only one of the two sessions.

If that sounds interesting to you, feel free to register for the workshop below. Also, if you have any additional questions or suggestions, please do not hesitate to contact us!


Note: We already have more registrations than available spots. You can still register, but for now we will place you on the waiting list in case of any cancellations.

The number of participants is limited to 30; we will keep a waiting list in case someone cancels their registration. A few days before the workshop, we will contact all registered participants to explain the preparation work we would like you to do in advance (e.g., setting up Python notebooks).

Preliminary Agenda

January 22, 2020 @ IPA (ETH Zurich)

08:30–09:00 — Reception & Official Welcome

───

09:00–09:45 — General Neural Network Theory

Feed-forward neural networks, (stochastic) gradient descent, optimizers (Adam, RMSProp, etc.), learning rate, hyper-parameter tuning, metric analysis, and a small digression into metric spaces
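To give you a flavour of what this looks like in code, here is a minimal sketch of a small feed-forward network trained with the Adam optimizer in TensorFlow/Keras. The dataset (MNIST), layer sizes, and hyper-parameter values are illustrative choices only, not necessarily the ones we will use in the workshop.

    import tensorflow as tf

    # Load a toy dataset (MNIST digits) and flatten the images into vectors.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    # A small feed-forward (dense) network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Adam is one of the optimizers covered in this session; the learning
    # rate is a typical hyper-parameter to tune.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Mini-batches of 32 samples, i.e. stochastic gradient-based training.
    model.fit(x_train, y_train, batch_size=32, epochs=5)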

───

09:45–10:30 — Convolutional Neural Networks

Filters and kernels, the convolution operation, the pooling operation, built-in feature extraction capabilities, well-known network architectures, and advanced architectures (Inception, ResNet, etc.)
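As a small illustration, the sketch below stacks convolution and pooling layers into a toy image classifier in TensorFlow/Keras; the input shape, filter counts, and number of classes are placeholder values for the example, not the workshop material itself.

    import tensorflow as tf

    # A stack of convolution + pooling layers, followed by a dense classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                               input_shape=(28, 28, 1)),  # 32 learnable 3x3 filters
        tf.keras.layers.MaxPooling2D(pool_size=2),         # pooling operation
        tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Print the resulting architecture and the number of parameters.
    model.summary()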

───

10:30–11:00 — Coffee Break

───

11:00–11:45 — Advanced Topics

Transfer learning, object detection and segmentation, using pre-trained networks, and neural style transfer (a.k.a. teaching a network to paint)
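As an illustration of the transfer-learning idea, the sketch below reuses a ResNet-50 pre-trained on ImageNet as a frozen feature extractor and adds a new, trainable classification head. The choice of ResNet-50 and the two-class head are assumptions made for this example, not necessarily what we will cover.

    import tensorflow as tf

    # Reuse a ResNet-50 pre-trained on ImageNet as a fixed feature extractor.
    base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # freeze the pre-trained weights

    # Add a small, trainable classification head for the new (two-class) task.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])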

───

12:00–13:00 — Impulse Talks

Four short talks showcasing existing applications of deep learning to science

───

13:00–14:00 — Lunch Break

───

14:00–15:30 — Introduction to TensorFlow 2.0

Tutorial on the latest version of Google's popular deep learning framework
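To give you an idea of what TensorFlow 2.x code looks like, here is a minimal sketch of two of its central features, eager execution and gradients via tf.GradientTape; the toy function being differentiated is, of course, just an example.

    import tensorflow as tf

    # TensorFlow 2.x executes eagerly by default: no graphs or sessions needed.
    x = tf.Variable(3.0)
    with tf.GradientTape() as tape:
        y = x ** 2 + 2.0 * x      # y is computed immediately
    grad = tape.gradient(y, x)    # dy/dx = 2*x + 2 = 8.0
    print(grad.numpy())           # -> 8.0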

───

15:30–16:00 — Coffee Break

───

16:00–17:30 — Practical Examples

Hands-on session with Jupyter notebooks

Impulse Talks

Umberto Michelucci

»Machine Learning in Fluorescence Spectroscopy«

Timothy Gebhard

»Deep learning frameworks beyond neural networks«

Francesca Venturini

»Applications of ML to oxygen sensing«

Markus Bonse

»Machine learning-based atmosphere prediction for adaptive optics«

Location

ETH Zurich

Institute for Particle Physics and Astrophysics (IPA)

Building HIL, Room D 10.2 (note: this has changed!)

Wolfgang-Pauli-Str. 27

8093 Zürich

Trainer

Umberto Michelucci

Umberto Michelucci is a co-founder and the chief AI scientist of TOELT LLC, a company developing modern teaching, coaching, and research methods to make AI technologies and research accessible to every company and everyone. He has authored two books, "Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks" and "Convolutional and Recurrent Neural Networks: Theory and Applications". He regularly publishes his research in leading journals and speaks at international conferences. He is also the first and only Google Developer Expert in Switzerland, works actively with Google, and gives trainings internationally.

Organizers & Contact

This workshop is organized by Timothy Gebhard, Prof. Dr. Sascha Quanz and Umberto Michelucci.

To get in touch, please use our respective institutional e-mail addresses.