What is Machine Learning Operations (MLOps) and why do you need it?

MLOps (Machine Learning Operations) is the set of practices, processes and principles that help us deploy, run and maintain machine learning algorithms at scale in live production environments.

Many DevOps best practices and processes originated from decades of experience in developing and managing complex software products. In the end, data science and machine learning are also software, but with a few quirks that make them more difficult to manage than regular software products. The main reason is that their performance and behavior rely not only on the source code, but also on the data used to train the model and the data that flows through it. This means there are multiple moving parts that need to be controlled and maintained. The complexity that ensues calls for a new set of DevOps processes and tools specifically for managing machine learning products. This is what we call MLOps.

At the core of MLOps are Continuous Integration, Continuous Deployment and the new practice of Continuous Training, augmented with elements such as testing, monitoring and version control of code, models and data.

The most important phases at the core of MLOps are:

  • Experimentation & Training

    The iterative process of experimenting with different models, features and parameters to develop the right algorithm for the business case. Strictly speaking this phase precedes MLOps, but it is important to keep track of which datasets and features were used to produce which outcomes.

  • Deployment (CI/CD)

    The model or pipeline is packaged and pushed to a serving environment. Often this includes a model registry with version control, storing metadata that links the data, code and model artifacts involved.

  • Testing & Validation

    This could be seen as part of the deployment step, but it involves validating model performance on live data before promoting the model to production. Testing ML software is more complex than testing regular software because performance is defined by the combination of code and data.

  • Model Serving

    Running the model live on incoming data. This often takes the form of a prediction (micro)service exposed to the environment for processing data. The tooling needed to scale and orchestrate multiple models simultaneously can get quite advanced.

  • Monitoring

    Continuously keeping track of changes in model performance and data distribution to ensure the model is performing according to its initial benchmark. This could also include metrics around the explainability of ML models.

  • Maintaining

    Models can degrade over time. Therefore, when performance drops it might be necessary to retrain the model on new data. This can be done manually (going back to step 1), or automatically by executing a (re-)training pipeline.
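As an illustration of the Testing & Validation gate described above, here is a minimal sketch in Python. The metric name and tolerance are hypothetical placeholders chosen for illustration, not a prescribed API; in practice both models would be evaluated on the same sample of live data.

```python
def passes_validation(candidate, production, tolerance=0.01):
    """Promote the candidate model only if it matches or beats the
    model currently in production, within a small tolerance.

    `candidate` and `production` are dicts of evaluation metrics
    computed on the same live data sample (hypothetical structure).
    """
    return candidate["accuracy"] >= production["accuracy"] - tolerance


# Example: the candidate outperforms production, so it may be promoted.
candidate = {"accuracy": 0.91}
production = {"accuracy": 0.88}
print(passes_validation(candidate, production))  # True
```

Real validation gates typically compare several metrics at once (and per data segment), but the principle stays the same: an explicit, automated check between deployment and production.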
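The Monitoring and Maintaining steps can likewise be sketched as a simple automated check that triggers a (re-)training pipeline. The accuracy benchmark and drift threshold below are illustrative assumptions; real systems use richer drift statistics and business-specific thresholds.

```python
import statistics

# Hypothetical thresholds; real values depend on the business case.
ACCURACY_BENCHMARK = 0.85   # performance level at initial deployment
DRIFT_TOLERANCE = 2.0       # max allowed shift, in standard deviations


def detect_drift(training_values, live_values):
    """Flag drift when the mean of a feature on live data moves too far
    from its mean on the training data."""
    mean = statistics.mean(training_values)
    stdev = statistics.stdev(training_values)
    live_mean = statistics.mean(live_values)
    return abs(live_mean - mean) > DRIFT_TOLERANCE * stdev


def needs_retraining(current_accuracy, training_values, live_values):
    """Trigger retraining on performance degradation or data drift."""
    degraded = current_accuracy < ACCURACY_BENCHMARK
    drifted = detect_drift(training_values, live_values)
    return degraded or drifted


# Example: accuracy dropped below the benchmark, so retraining is triggered.
print(needs_retraining(0.78, [1.0, 1.2, 0.9, 1.1], [1.0, 1.1, 1.05]))  # True
```

In a mature setup this check runs continuously against live traffic, and a positive result kicks off the automated retraining pipeline mentioned under Maintaining.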

MLOps is currently gaining momentum in the data science community, as the Google Trends figure below shows. A few years ago many companies were still exploring the space, discovering business cases and slowly building up machine learning capabilities; today, many organizations have reached the point where their analytics initiatives are mature enough to be applied in real life.

Figure 1. Interest over time – MLOps (Google Trends)

It is important to realize that the processes and tools needed to implement MLOps in your team or organization are not set in stone. Each team has different needs, which often calls for a tailored stack of interoperable niche tools. An interesting development in this space is the AI Infrastructure Alliance, which stimulates the rise of a canonical stack of MLOps tools.

The MLOps space will see a lot of development and growth over the next few years as the reliable operation of ML within organizations becomes critical for delivering the expected return on investment for advanced analytics.

About UbiOps

UbiOps is an easy-to-use deployment and serving platform. It helps you turn your Python and R models and scripts into web services, allowing you to use them from anywhere at any time, so you can embed them in your own applications, website or data infrastructure without having to worry about security, reliability or scalability. This makes it a valuable tool for getting ahead in your MLOps journey.

UbiOps is built to be as flexible and adaptive as you need it to be for running your code, without compromising performance or security. We’ve designed it to fit like a building block on top of your existing stack, rather than requiring you to make an effort to integrate it with your product. It lets you manage your models from one centralized location, offering flexibility and governance.

You can find more technical information in our documentation: www.ubiops.com/docs

To help you get up to speed with UbiOps, we have prepared some examples and quickstart tutorials: https://ubiops.com/docs/quickstart/
