Francesca Lazzeri on What You Should Know before Deploying ML in Production –


Nov 16, 2021 3 min read
Anthony Alford
At the recent QCon Plus online conference, Dr. Francesca Lazzeri gave a talk on machine learning operations (MLOps) titled "What You Should Know before Deploying ML in Production." She covered four key topics: MLOps capabilities, open-source integrations, machine-learning pipelines, and the MLflow platform.
Dr. Lazzeri, a principal data scientist manager at Microsoft and adjunct professor of AI and Machine Learning at Columbia, began by discussing several challenges encountered in the lifecycle of an ML project, from collecting and cleaning large datasets, to tracking multiple experiments, to deploying and monitoring models in production. She then covered the main areas data scientists and engineers should consider. First, she outlined several MLOps capabilities for managing models, deployments, and monitoring. Next, she discussed several open-source tools for deep learning, along with frameworks for managing machine-learning pipelines. Finally, she gave an overview of MLflow, an open-source machine-learning platform. In the post-presentation Q&A session, Dr. Lazzeri cautioned against thinking of MLOps as a "static tool." Instead, she said:
MLOps is more about culture and thinking on how you can connect different tools in your end-to-end development experience…and how you can optimize some of these opportunities that you have.
The talk began with a discussion of some of the challenges of developing and deploying ML applications. ML models require large amounts of training data, and tracking and managing these datasets can be difficult. There is also the challenge of feature engineering: extracting and cataloging the features in the datasets. Training an accurate model can require many experiments with different model architectures and hyperparameter values, which must also be tracked. Finally, after a model is deployed to production, it must be monitored. This differs from monitoring conventional web apps: in addition to standard performance data such as response latency and exceptions, model predictions must be measured against ground truth, and the entire lifecycle must be repeated if the real-world data drifts away from the data originally collected.
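Drift monitoring of the kind described above can be sketched as a two-sample comparison between the training data and recent production data. The following is a minimal, stdlib-only illustration (the function names and the 0.2 threshold are illustrative choices, not from the talk) that flags drift when the two empirical distributions diverge:

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for x in a + b:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

def drifted(training_sample, live_sample, threshold=0.2):
    """Flag drift when the distribution gap exceeds a chosen threshold."""
    return ks_statistic(training_sample, live_sample) > threshold

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(500)]
live_same = [random.gauss(0.0, 1.0) for _ in range(500)]      # same distribution
live_shifted = [random.gauss(1.5, 1.0) for _ in range(500)]   # mean has drifted

print(drifted(train, live_same))     # no drift flagged
print(drifted(train, live_shifted))  # drift flagged
```

In practice a library routine such as SciPy's two-sample KS test (which also returns a p-value) would replace the hand-rolled statistic, and the check would run per feature on a schedule, triggering retraining when it fires.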
MLOps can help data scientists and engineers see these challenges as opportunities, and Dr. Lazzeri listed seven important MLOps capabilities that address them.
She then discussed several open-source packages that can help with these capabilities. First, she mentioned three popular frameworks for training models: PyTorch, TensorFlow, and Ray. Dr. Lazzeri noted that in a survey of commercial users, TensorFlow was used by about 60% and PyTorch around 30%. She mentioned that Ray has many features specialized for reinforcement learning, although some are in beta or even alpha status. She also mentioned two frameworks for interpretable and fair models: InterpretML, which can train explainable "glass box" models or explain black box ones, and Fairlearn, a Python package for detecting and mitigating unfairness in models. Dr. Lazzeri also recommended Open Neural Network Exchange (ONNX), an interoperability framework that allows models trained in various frameworks to be deployed on a wide variety of hardware platforms.
Next, Dr. Lazzeri discussed ML pipelines, which manage data preparation, model training and validation, and deployment. She outlined three pipeline scenarios and a recommended open-source framework for each: Kubeflow for managing a data-to-model pipeline, Apache Airflow for managing a data-to-data pipeline, and Jenkins for managing a code-to-service pipeline. Each scenario has different strengths and appeals to a different persona: Kubeflow for data scientists, Airflow for data engineers, and Jenkins for developers or DevOps engineers. Finally, she gave an overview of MLflow, an open-source platform for managing the end-to-end ML lifecycle. MLflow has components for tracking experiments, packaging code for reproducible runs, deploying models to production, and managing models and their associated metadata.
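The experiment-tracking pattern MLflow provides boils down to recording each training run's parameters and metrics under a unique run ID so runs can later be compared. A stdlib-only sketch of that pattern (hypothetical code illustrating the idea, not MLflow's actual API) might look like:

```python
import json
import uuid
from pathlib import Path

class RunTracker:
    """Toy experiment tracker: one JSON file per run, holding the
    run's parameters and metrics. A sketch of the tracking pattern,
    not a substitute for MLflow."""

    def __init__(self, root="runs"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def log_run(self, params, metrics):
        """Persist one run's hyperparameters and results; return its ID."""
        run_id = uuid.uuid4().hex
        record = {"run_id": run_id, "params": params, "metrics": metrics}
        (self.root / f"{run_id}.json").write_text(json.dumps(record))
        return run_id

    def best_run(self, metric):
        """Return the stored run with the highest value of `metric`."""
        runs = [json.loads(p.read_text()) for p in self.root.glob("*.json")]
        return max(runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker()
tracker.log_run({"lr": 0.1, "epochs": 5}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01, "epochs": 10}, {"accuracy": 0.88})
best = tracker.best_run("accuracy")
print(best["params"])  # hyperparameters of the most accurate run
```

MLflow's real tracking component adds a server, a UI, artifact storage, and model packaging on top of this basic record-and-compare loop.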
The session concluded with Dr. Lazzeri answering audience questions. Several attendees asked about ONNX. Dr. Lazzeri noted that in her survey, about 27% of respondents were using ONNX; she also noted that models from both Ray and PyTorch perform well on ONNX. She recommended automated machine learning (AutoML) as a good way to help developers scale their model training. She concluded by noting that although tools can help, monitoring the accuracy of ML models in production remains a somewhat manual process.
