Lectures & Talks
This is a collection of AI/ML-related presentations and lectures that I have been involved with.
LLMOps: the Lifecycle of an LLM
The session provides advice and guidelines on creating long-term successful Generative AI applications.
The video recording and slides are now freely available here: https://aws.amazon.com/events/aws-innovate/apj/aiml-data
Click “Register now to watch on-demand” and go to the “Generative AI fundamentals” section.
MLOps with Amazon SageMaker
Amazon SageMaker is a managed ML platform that aims to streamline the end-to-end Machine Learning (ML) lifecycle. In this series, we dive deeper into the value proposition of Amazon SageMaker for Machine Learning Operations (MLOps). We start from the basics: what MLOps is, how it relates to DevOps, and why we need it. We then gradually dive deeper into the different tools that Amazon SageMaker offers for MLOps, covering task orchestration, feature store, inference options, and MLOps architectural patterns. This series is designed for data engineers, data scientists, and developers with prior experience in Amazon SageMaker and basic knowledge of AWS services. You will need an AWS account in order to access this SkillsBuilder material.
Introduction to MLOps
This is an introductory session to MLOps, with zero assumptions about the viewers' knowledge. In this session, you will learn about the concept of MLOps, how it relates to DevOps, and the business benefits of adopting it. You will see MLOps from three different dimensions: people, processes, and technology. Finally, you will learn about Amazon SageMaker's MLOps value proposition and the tools that AWS offers in this domain.

Orchestration tools
The ML lifecycle is an experimental, iterative process with many different branches and workflows. In this session, you will learn about the native AWS task orchestration tools. First, we will start with an overview of the best-known MLOps orchestration tools (both third-party and AWS-native). Then, we will dive deeper into AWS Step Functions and SageMaker Pipelines. You will learn how to create, manage, and orchestrate ML workflows in order to operationalize your ML projects.

Feature Store
During the model development phase, a significant amount of time is usually spent on feature engineering. In many cases, ML teams tend to re-compute features across different projects, leading to duplicated effort, reproducibility issues, and slower time to market. In this session, you will learn how Amazon SageMaker Feature Store can solve this by offering a central repository of feature values. You will learn how to ingest data into online/offline stores, as well as different architectural patterns for incorporating Feature Store into your ML projects.

MLOps architectures
When it comes to MLOps, there is no single universal implementation approach. In this session, you will learn about different architectural patterns for MLOps, including single- and multi-account approaches. You will also learn about SageMaker Feature Store, SageMaker Projects, and SageMaker Lineage Tracking, and how you can incorporate them into your MLOps architectures.

Inference
Deploying a model to production is not the end of the ML lifecycle. Specific actions have to take place to ensure that the model stays relevant and to avoid different types of drift. In this session, you will learn how to use SageMaker Model Monitor to ensure that your models remain free of data, model, bias, and explainability drift. You will also learn how to conduct A/B testing on different model versions using Amazon SageMaker's managed endpoints.
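As a rough illustration of the A/B testing pattern the Inference session covers: SageMaker real-time endpoints can host multiple production variants, and each variant's traffic share is its relative weight divided by the sum of all weights. The sketch below is a minimal, self-contained example; the model names, variant names, instance type, and 90/10 split are all placeholder assumptions, and the dictionaries mirror the shape of the `ProductionVariants` field in the `create_endpoint_config` API.

```python
# Sketch of an A/B test configuration for a SageMaker real-time endpoint.
# Variant weights are relative: traffic share = weight / sum(weights).
# Model names, variant names, and the 90/10 split are placeholders.

def traffic_shares(variants):
    """Return {variant_name: traffic fraction} from relative weights."""
    total = sum(v["InitialVariantWeight"] for v in variants)
    return {v["VariantName"]: v["InitialVariantWeight"] / total for v in variants}

variants = [
    {
        "VariantName": "model-a",        # current production model
        "ModelName": "my-model-v1",      # placeholder model name
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 9.0,
    },
    {
        "VariantName": "model-b",        # candidate model under test
        "ModelName": "my-model-v2",      # placeholder model name
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,
    },
]

print(traffic_shares(variants))  # {'model-a': 0.9, 'model-b': 0.1}

# With real AWS credentials, a config like this would be passed to
# boto3.client("sagemaker").create_endpoint_config(
#     EndpointConfigName="ab-test-config", ProductionVariants=variants)
```

Shifting traffic between versions is then a matter of updating the variant weights, which is what makes weighted variants a convenient vehicle for gradual rollouts as well as A/B tests.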
Fine-grained Emotional Profiling
This was a talk presented at the 2016 ASEAN RE-WORK Deep Learning Summit. It presented my research at ADSC on Facial Expression Analysis, showcasing results for both dimensional (regression) and categorical (classification) expression estimation. [October 2016]
Integrating Amazon Textract with a human-in-the-loop pipeline
This was part of a full-day webinar on Computer Vision and Machine Learning (ML), held on 20 October 2020 for the ASEAN region. My lecture was part of the Technical track and focused on how to combine ML models with human-in-the-loop pipelines. The example shown here refers to Amazon Textract (smart AI OCR and document analysis), but can easily be extended to other ML applications. [October 2020]
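To give a flavour of the human-in-the-loop pattern the lecture describes: a common approach is to auto-accept OCR output above a confidence threshold and route the rest to human review. The sketch below is a minimal illustration, not the lecture's actual code; the response dict is a hand-made stand-in for a real Textract `DetectDocumentText` response (which returns `Blocks` with `BlockType`, `Text`, and `Confidence` fields), and the 80% threshold is an arbitrary illustrative choice.

```python
# Sketch: routing low-confidence Textract output to human review.
# The response dict below is a hand-made stand-in for a real
# detect_document_text() response; the 80% threshold is illustrative.

CONFIDENCE_THRESHOLD = 80.0

def split_for_review(response, threshold=CONFIDENCE_THRESHOLD):
    """Split LINE blocks into auto-accepted vs needs-human-review."""
    accepted, review = [], []
    for block in response["Blocks"]:
        if block["BlockType"] != "LINE":
            continue  # skip WORD/PAGE blocks; lines are enough here
        target = accepted if block["Confidence"] >= threshold else review
        target.append(block["Text"])
    return accepted, review

fake_response = {
    "Blocks": [
        {"BlockType": "LINE", "Text": "Invoice #1234", "Confidence": 99.2},
        {"BlockType": "LINE", "Text": "T0tal: $52O.00", "Confidence": 61.5},
        {"BlockType": "WORD", "Text": "Invoice", "Confidence": 99.2},
    ]
}

accepted, review = split_for_review(fake_response)
print(accepted)  # ['Invoice #1234']
print(review)    # ['T0tal: $52O.00']  -> would be sent to a human reviewer
```

In a production pipeline, the low-confidence items would typically be sent to a managed review workflow rather than a local list, but the thresholding logic is the core of the pattern.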
Building Machine Learning Pipelines on AWS
In this session we'll introduce best practices, components and common architectures in modern ML Operations. We will see how the AWS ecosystem has evolved to support the different stages of an ML pipeline. [March 2021]
MLOps: Automating ML Workflows with Governance and Security
ML Operations (MLOps) is the methodology that enables data science teams to automate, orchestrate, monitor, and deploy ML models at scale. In this session, we will start by focusing on the practical issues that organizations face while trying to productionize ML systems. We will explore the set of tools that AWS offers and how they can remove the heavy lifting from your teams, reducing the total cost of ownership and allowing them to innovate faster. We will also share MLOps best practices specific to governance and security, allowing developers and data science teams to build and experiment with their ML workloads at scale. [March 2021]
Build your AI-ready data platform
Discussion on the benefits of adopting an AI/ML platform approach for data science teams. [March 2022]
Leveraging AI/ML and AWS Turnkey Solutions to accelerate innovation at scale
This is a session presented during the 2022 AWS ASEAN Partner Summit, focusing on how AWS partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions. [May 2022] (You will have to register to access the recording.)
https://summit-asean.virtual.awsevents.com/media/t/1_4lxyealz/253130333