Until recently, Machine Learning has mostly been applied in industry by consulting academics, by data scientists within larger companies, and by a small number of dedicated Machine Learning research labs at a few of the world’s most innovative tech companies. Over the last few years we have seen the dramatic rise of companies dedicated to providing Machine Learning software-as-a-service tools, with the aim of democratizing access to the benefits of Machine Learning. All of these efforts have revealed major hurdles to ensuring the continual delivery of good performance from deployed Machine Learning systems. These hurdles range from challenges in MLOps, to fundamental problems with deploying certain algorithms, to the legal and ethical issues raised by letting algorithms make decisions for a business.
This workshop will invite papers related to the challenges in deploying and monitoring ML systems. It will encourage submissions on:
subjects related to MLOps for deployed ML systems, such as
testing ML systems,
debugging ML systems,
monitoring ML systems (see the sketch after this list),
debugging ML models,
deploying ML at scale;
subjects related to the ethics of deploying ML systems, such as
ensuring fairness, trust, and transparency of ML systems,
providing privacy and security in ML systems;
useful tools and programming languages for deploying ML systems;
specific challenges relating to
deploying reinforcement learning in ML systems,
and performing continual learning and continual delivery in ML systems;
and, finally, data challenges for deployed ML systems.
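To make the monitoring topic above concrete, here is a minimal sketch of one common check on a deployed model's inputs: comparing each feature of a live batch against a reference (training) sample with a two-sample Kolmogorov–Smirnov test and flagging features whose distribution has shifted. The function name, threshold, and synthetic data are illustrative assumptions, not part of the call for papers.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference: np.ndarray, live_batch: np.ndarray,
                        p_threshold: float = 0.01) -> dict:
    """Compare each feature column of a live batch against reference data.

    Returns a dict mapping column index -> KS p-value for the columns whose
    distribution appears to have shifted beyond the (hypothetical) threshold.
    Intended only as a minimal monitoring sketch, not a production recipe.
    """
    drifted = {}
    for col in range(reference.shape[1]):
        _stat, p_value = ks_2samp(reference[:, col], live_batch[:, col])
        if p_value < p_threshold:
            drifted[col] = p_value
    return drifted

# Example: inject a mean shift into one feature of a synthetic live batch.
rng = np.random.default_rng(0)
reference = rng.normal(size=(5_000, 3))
live = rng.normal(size=(1_000, 3))
live[:, 1] += 0.5  # simulated drift in feature 1
print(check_feature_drift(reference, live))  # expect feature 1 to be flagged
```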
Invited talks:
Makerere University and Google: "Deploying Machine Learning Models in a Developing World Context"
Amazon: "Successful Data Science in Production Systems: It’s All About Assumptions"
MIT: "System-wide Monitoring Architectures with Explanations"
Facebook AI and Inria: "Conservative Exploration in Bandits and Reinforcement Learning"
ABSTRACT: A major challenge in deploying machine learning algorithms for decision-making problems is the lack of guarantees on the performance of their resulting policies, especially those generated during the initial exploratory phase of these algorithms. Online decision-making algorithms, such as those in bandits and reinforcement learning (RL), learn a policy while interacting with the real system. Although these algorithms will eventually learn a good or optimal policy, there is no guarantee on the performance of their intermediate policies, especially at the very beginning, when they perform a large amount of exploration. Thus, in order to increase their applicability, it is important to control their exploration and make it more conservative.
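The conservative-exploration idea described in this abstract can be illustrated with a toy bandit sketch, assuming a standard conservative-UCB-style rule: the learner plays the optimistic (UCB) arm only when a pessimistic estimate of its cumulative reward stays within an alpha fraction of what a known-safe baseline arm would have earned, and otherwise falls back to the baseline. All names, constants, and the exact condition below are illustrative assumptions, not the algorithm presented in the talk.

```python
import numpy as np

def conservative_ucb(arms, baseline_mean, alpha=0.1, horizon=10_000, seed=0):
    """Toy sketch of conservative exploration in a stochastic bandit.

    `arms` is a list of callables returning a stochastic reward in [0, 1];
    arm 0 is treated as the known-safe baseline with mean `baseline_mean`.
    At each step the learner plays the optimistic (UCB) arm only if a
    pessimistic projection of the cumulative reward stays above
    (1 - alpha) times what the baseline alone would have earned.
    """
    rng = np.random.default_rng(seed)
    n = len(arms)
    counts = np.zeros(n)
    sums = np.zeros(n)
    cum_reward = 0.0

    for t in range(1, horizon + 1):
        # Optimistic (UCB) index for each arm; unplayed arms get +inf.
        means = np.divide(sums, counts, out=np.full(n, np.inf), where=counts > 0)
        bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
        ucb = np.where(counts > 0, means + bonus, np.inf)
        candidate = int(np.argmax(ucb))

        # Pessimistic (lower-confidence) value of the candidate arm.
        lcb = 0.0 if counts[candidate] == 0 else max(means[candidate] - bonus[candidate], 0.0)

        # Conservative check: only explore if the pessimistic projection of the
        # cumulative reward stays within an alpha fraction of the baseline's.
        if cum_reward + lcb >= (1 - alpha) * baseline_mean * t:
            arm = candidate
        else:
            arm = 0  # fall back to the safe baseline

        reward = arms[arm](rng)
        counts[arm] += 1
        sums[arm] += reward
        cum_reward += reward

    return sums / np.maximum(counts, 1), counts
```

For example, with arms = [lambda rng: float(rng.binomial(1, 0.5)), lambda rng: float(rng.binomial(1, 0.7))] and baseline_mean = 0.5, the learner stays close to the safe arm early on and shifts to the better arm only once its lower-confidence estimate justifies the exploration.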
"Bridging the gap between research and production in machine learning"
Zhenwen Dai (Spotify): Model Selection for Production Systems
Erick Galinkin (Montreal AI Ethics Institute and Rapid7): Green Lighting ML
Camylle Lanteigne (MAIEI and McGill University): SECure: A Social and Environmental Certificate for AI Systems
Yuzhui Liu (Bloomberg): Deploy machine learning models serverlessly at scale
Alexander Lavin (Augustus Intelligence): ML lacks the formal processes and industry standards of other engineering disciplines.
Alexander Lavin (Augustus Intelligence): Approaches to AI ethics must consider second-order effects and downstream uses, but how?
Organizers: Mind Foundry, Mind Foundry, and University of Cambridge