As MLOps Hits Maturity, It’s Time to Consider Cybersecurity

Pushing your ML project to production? Here’s what to look out for

Written in collaboration with James Stewart from TrojAI

In the last few years, there has been a lot of development in the field of MLOps. New tools for supporting various operational tasks of the MLOps life cycle have popped up, and more ML projects are starting to make it to production. With these advancements on the operations side hitting maturity, the focus is now shifting to ensuring that your MLOps projects are also safe and secure. This is a natural evolution of any technology and is needed to create even more momentum since model trust is the number one concern for deploying mission-critical systems.

In the traditional software development world, cybersecurity is a well-researched and discussed topic. On the MLOps side though, the topic doesn’t seem to be discussed much outside of the academic world. I personally believe that cybersecurity deserves more attention in the model development life cycle. Through my work at UbiOps, and its collaboration with the Dutch National Cyber Security Center, I have learned a lot about cybersecurity in the past several months. In the following sections, I will share what I’ve learned about how vulnerabilities are introduced in MLOps, and what can be done to mitigate them.

To make sure I’m not missing any important points, I have partnered up with James Stewart for this article. James works at TrojAI, which is a robustness and security platform for MLOps, so he’s always on top of the latest cybersecurity trends!

Table of contents

Why is cybersecurity important in MLOps?

Cybersecurity in the different phases of the MLOps life cycle

Data collection and preparation

Model creation

Model evaluation and optimization

Model deployment and model usage

Model monitoring

Conclusion

References

Why is cybersecurity important in MLOps?

With every new capability comes a new vulnerability. With more projects making it into production, adversarial machine learning (ML's version of malware) has started to grow rapidly. A recent study from NCC Group highlighted that organisations are increasingly using ML models in their applications without considering security requirements. Hackers appear more than happy to exploit that lack of security, which has resulted in a Tesla being tricked into veering into the wrong lane, Microsoft's chatbot becoming racist only hours after deployment, and Cylance's antivirus software being defeated via an ML vulnerability, to name a few. In fact, there are now dedicated resources tracking attacks on ML, including MITRE ATLAS and the AI Incident Database.

Recent studies on the topic of adversarial machine learning provide some troubling insights:

  • Training an ML system on sensitive data seems to be fundamentally risky: data used to train a system can often be recovered by attackers using various techniques.
  • Neural networks can easily be forced into misclassifications with adversarial examples. Countermeasures do exist, but are prone to introducing other vulnerabilities into the system.
  • It is possible to extract high-fidelity copies of trained models by exercising the model and observing the results.

Luckily these risks can be reduced with proper security measures, although there is no silver bullet to guard against all possible vulnerabilities. If you are moving ML models into production, it’s definitely worth investigating your security practices and checking what sorts of attacks are the most important to guard against for your use case or business.

Cybersecurity in the different phases of the MLOps life cycle

If you are working in MLOps, I assume you are familiar with the MLOps (or data science) life cycle. We will use this life cycle to walk you through best practices that can help mitigate the sorts of vulnerabilities that can arise at each step.

Image by author

Data collection and preparation

In addition to the practices typically followed for collecting, cleaning, processing and storing data during traditional model development, there are several security considerations that should be top of mind at this stage to avoid vulnerabilities such as:

  • data poisoning (external, internal or backdoor)
  • annotation issues and the exposure of sensitive information
  • an improperly maintained data processing software stack

It is widely accepted that data and annotations must be of high quality for a model to be successful, but we also need to ensure that this data can be trusted.

Image by author

There are several questions that should be considered to ensure the security of data and annotations, and to mitigate against supply chain vulnerabilities. For example, we should consider how data is collected and whether it comes from an internal or an external source. If external, is that source reputable? Regardless of its reputation, are there mechanisms in place for ensuring the data from these sources has been transferred securely? Internally, how is data being stored once obtained, and how is access to data storage managed?

For data that is publicly available, are there any licences for its use, and have they been satisfied? What processes are in place for annotation pipelines as both the data and the teams evolve? How are new people onboarded, and how is the quality of their work ensured early on?

Finally, what tools are in place to ensure that there are no adversarial attacks contained in the data, either naturally occurring or malicious?

The role of security stakeholders at this stage:

  • Provide and manage appropriate access controls to the data, annotations and other metadata stores.
  • Ensure mechanisms for secure external data collection and transfer are in place (e.g., checksums, source reputation, licences); see the sketch after this list.
  • Identify potential software vulnerabilities in any automated data collection pipelines.
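To make the checksum point concrete, here is a minimal sketch of verifying that an externally sourced data file arrived intact before ingesting it. The file path and expected digest are hypothetical placeholders; in practice the digest would be published by the data provider over a trusted channel.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders: the real digest should come from the data provider
# via a trusted channel (e.g., a signed manifest), not from the download itself.
DATA_FILE = Path("external/customer_events.csv")
EXPECTED_SHA256 = "digest-published-by-the-provider"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large datasets do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(DATA_FILE) != EXPECTED_SHA256:
    raise RuntimeError(f"Checksum mismatch for {DATA_FILE}; refusing to ingest it.")
```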

Model creation

The foundations of many machine learning model architectures already exist, and most companies will use an existing pre-trained model or data mining algorithm as a starting point and fine-tune it for their application/use case rather than reinvent the wheel. Best practices during model creation should include using only the official GitHub repositories for algorithms and models owned by the authors or institutions from which the academic literature originated, or whitelisted repositories verified by said organisations.

During this phase, enough information about the purpose and candidacy of certain models should already have been solidified, but it is important to revisit the security aspects and implications of any prior design decisions. It is essential to rigorously assess potential solutions from a security standpoint before selecting candidate models. This is an iterative process that may uncover information requiring a re-evaluation of some of the assumptions made during the design phase.

Be mindful of hidden security issues

Image by author

The role of security stakeholders at this stage:

  • Identify potential vulnerabilities in external, publicly available pre-trained models.
  • Consume only whitelisted official public repositories for any external models (see the sketch after this list).
  • Support formal testing, define risk scores for model usage and enforce acceptable levels of risk.
  • Identify tools and audit processes that mitigate the risk of using candidate models by highlighting insights into model and data integrity, explainability and robustness.
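To illustrate the whitelisting point, here is a minimal sketch that gates model downloads on an allowlist of official repositories and verifies the downloaded artefact against a known digest. The repository names, URL structure and digest are hypothetical placeholders, not a prescription for any particular hub.

```python
import hashlib
import urllib.parse
import urllib.request

# Hypothetical allowlist of official repositories vetted by the security team.
ALLOWED_REPOS = {
    "github.com/pytorch/vision",
    "github.com/huggingface/transformers",
}

def download_model(url: str, expected_sha256: str, dest: str) -> str:
    """Download a pre-trained model only from an allowlisted repository and verify its digest."""
    parsed = urllib.parse.urlparse(url)
    repo = parsed.netloc + "/" + "/".join(parsed.path.strip("/").split("/")[:2])
    if repo not in ALLOWED_REPOS:
        raise ValueError(f"{repo} is not on the approved repository allowlist")
    artefact = urllib.request.urlopen(url).read()
    if hashlib.sha256(artefact).hexdigest() != expected_sha256:
        raise ValueError("Downloaded artefact does not match the expected checksum")
    with open(dest, "wb") as f:
        f.write(artefact)
    return dest
```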

Model evaluation and optimization

The black-box nature of most machine learning models underscores a need for robustness and explainability at this stage to help with reducing the risk and potential vulnerabilities in using the model. The goal at this stage should be to give stakeholders the visibility they need for model risk management before the model is deployed to production.

Traditional model development relies on performance metrics such as aggregate accuracy, precision, recall and F1 scores as a means of evaluating whether a model is a good candidate for deployment. Unfortunately, these do not provide the full picture around model assurance, robustness or risk management. For example:

  • Accuracy metrics do not capture model behaviour with enough granularity.
  • The process assumes the model will always be queried in a certain, predictable way.
  • These metrics do not provide necessary insight into potential system-level security vulnerabilities.
  • They cannot, in general, capture corner cases, nor can they control for the noisy or adversarial data (both naturally occurring and malicious) that one should expect to see in production.

Security teams should work with Quality Assurance teams to run a series of tests on the model to provide some level of assurance for robustness and other insights of a model. These insights should inform a risk profile associated with any candidate models that would be potentially deployed to a production environment.

Model input failure mitigation

Tests should check for model input failures, as standard ML packages do not provide out-of-the-box capabilities to check whether the data is valid before returning a result. This can lead to silent failures where the model produces erroneous outputs. Such tests might include (a minimal sketch follows below):

  • Determining whether numeric values are within an acceptable range.
  • Ensuring that the model is invariant to data type conversions.
  • Detecting if features are missing from the input.
  • Identifying feasible inputs into a model and perturbing them for behavioural effects.
Always evaluate your model for measures beyond accuracy

Image by author
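Below is a minimal sketch of what such input checks could look like for a tabular model. The feature names, acceptable ranges and the model's predict interface are hypothetical placeholders; a real test suite would be generated from your own schema and run as part of QA.

```python
import numpy as np

# Hypothetical schema: feature name -> (min, max) of acceptable values.
FEATURE_RANGES = {"age": (0, 120), "account_balance": (-1e6, 1e9)}

def validate_record(record: dict) -> None:
    """Reject records with missing features or out-of-range numeric values
    before they ever reach the model."""
    for name, (lo, hi) in FEATURE_RANGES.items():
        if name not in record:
            raise ValueError(f"Missing feature: {name}")
        value = float(record[name])
        if not np.isfinite(value) or not (lo <= value <= hi):
            raise ValueError(f"{name}={value} outside acceptable range [{lo}, {hi}]")

def test_dtype_invariance(model, record: dict) -> None:
    """The model should return the same prediction whether values arrive as ints or floats."""
    as_float = {k: float(v) for k, v in record.items()}
    as_int = {k: int(v) for k, v in record.items()}
    assert model.predict(as_float) == model.predict(as_int)
```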

Bias and system performance

Create tests to check for technical validation of system performance and for bias, especially over time or in differing environments. This can surface potentially harmful issues such as bias, as well as provide insights towards the mitigation strategies needed to increase model trustworthiness. These tests could include (a simple subgroup check is sketched after the list):

  • Identifying and quantifying performance bias such as a discrimination towards protected features like race or gender, or other related proxy features.
  • Detecting hidden longer-term effects like distributional drift over time.
  • Exploring the possibility of unexpected discrepancies between data subsets.
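As a sketch of the first point, the snippet below compares accuracy across subgroups defined by a protected attribute and flags the model when the gap exceeds a tolerance. The column names and the tolerated gap are hypothetical placeholders; which metrics and thresholds are appropriate is a decision for your compliance and domain experts.

```python
import pandas as pd

def accuracy_by_group(df: pd.DataFrame, group_col: str, y_true: str, y_pred: str) -> pd.Series:
    """Per-group accuracy for a dataframe holding labels and predictions."""
    return df.assign(correct=df[y_true] == df[y_pred]).groupby(group_col)["correct"].mean()

def check_performance_gap(df: pd.DataFrame, group_col: str = "gender", y_true: str = "label",
                          y_pred: str = "prediction", max_gap: float = 0.05) -> None:
    """Raise if accuracy differs between subgroups by more than max_gap."""
    per_group = accuracy_by_group(df, group_col, y_true, y_pred)
    gap = per_group.max() - per_group.min()
    if gap > max_gap:
        raise AssertionError(f"Accuracy gap of {gap:.3f} across '{group_col}' exceeds {max_gap}")
```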

Robustness failure mitigation

Create tests for robustness failures. These types of failures are not just problematic in an adversarial context; more broadly, they indicate that your model may not generalise well to new or unseen data. Various model compliance, assessment and robustness tools can provide some foundations in this area; however, organisations may need to build capacity with appropriate subject matter and domain expertise to support some of the more complex, open-ended tests.
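One simple starting point, sketched below, is a perturbation test: feed the model slightly noisier copies of inputs and measure how often its prediction flips. It assumes a scikit-learn-style classifier with numeric features, and the noise level is an arbitrary placeholder; it is not a substitute for dedicated adversarial robustness tooling.

```python
import numpy as np

def prediction_flip_rate(model, X: np.ndarray, noise_std: float = 0.01,
                         n_trials: int = 10, seed: int = 0) -> float:
    """Fraction of (sample, trial) pairs where small Gaussian noise changes the prediction."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = 0
    for _ in range(n_trials):
        noisy = X + rng.normal(scale=noise_std, size=X.shape)
        flips += int(np.sum(model.predict(noisy) != baseline))
    return flips / (len(X) * n_trials)

# A flip rate well above zero on clean, in-distribution data is a robustness warning sign.
```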

Proactive model assessment and evaluation is critical to prevent incurring significantly larger costs down the line. At this point ensure that mechanisms for tracking identified issues or other risks are put in place and documented.

The role of security stakeholders at this stage:

  • Advocate and lead a cross-functional committee to evaluate task and model-specific tests to understand model assessment and robustness.
  • Support QA evaluation of these metrics and provide insights into how the results inform model inference security.
  • Apply a level of application security to the automated evaluation deployment and MLOps pipelines.
  • Define processes for audit trails of inference pipelines for candidate production models.

Model deployment and model usage

Model deployment is the first step in which the model gets introduced to the production environment. This typically entails access to more data, access to new data coming in, and sometimes also access for people outside of the organisation. The latter is the case when you put a recommender system behind your WebApp for instance. These changes mean that the attack surface of the model suddenly increases. Training datasets can be controlled for quality, but data coming in from external users is much harder to sanitise. This opens the door for:

  • Evasion attacks
  • Model theft
  • Privacy attacks
  • Code injections
  • Corrupted packages

To name a few. And then we haven’t even mentioned most of the general issues that the ML application as a whole can suffer from, like traditional infrastructure issues, web application bugs and DDoS attacks.

Evasion attacks (also called “adversarial examples”)

These are attacks where the model is fed a so-called “adversarial example”. An adversarial example looks like normal input to the human eye, but can completely throw off the model.


Figure depicting fast adversarial example generation for an evasion attack. The added noise causes the classifier to see the panda as a gibbon. Image by Goodfellow et al.

Video of the stealth T-shirt demo at DEFCON. An adversarial example is used to throw off an object detection model in a smart camera, making the individuals undetectable by the model. Video from AdvBox.

Evasion attacks are a very broad class of attacks, which makes it difficult to discuss specific mitigations. Instead, we would advise you to consider the following (a minimal attack sketch for robustness testing follows the list):

  • Certain training techniques can make models more robust against adversarial examples.
  • If your model is opened up to the internet, consider adding authentication to make an online attack traceable.
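To probe the first point on your own models, a common starting place is the fast gradient sign method from Goodfellow et al., referenced above. Here is a minimal PyTorch sketch, assuming a differentiable classifier and inputs scaled to [0, 1]; it generates test inputs for robustness evaluation and is not a full adversarial training recipe.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """One-step fast gradient sign perturbation that tries to flip the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

If accuracy collapses on inputs perturbed this way, that is a strong signal to revisit the training techniques mentioned in the first bullet.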

Model theft

Hackers can “steal” trained models that are behind an API by making enough requests to it. Various publications already highlight how this can be achieved surprisingly easily.

Luckily, it typically takes a lot of requests to reverse engineer or steal a trained model. This can be made much harder by implementing rate limits, limiting the number of requests a user can make in a given time window. Also make sure that the file systems storing your trained models are secure, and consider configuring alerts for unusually heavy usage of the model.
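Rate limiting is usually best enforced at the API gateway or load balancer, but to make the idea concrete here is a minimal in-process sketch of a per-client token bucket. The capacity and refill rate are arbitrary placeholders.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow at most `capacity` requests per client, refilled at `rate` tokens per second."""

    def __init__(self, capacity: int = 60, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = defaultdict(lambda: float(capacity))
        self.last_seen = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen.get(client_id, now)
        self.last_seen[client_id] = now
        # Refill proportionally to the time since the client's last request.
        self.tokens[client_id] = min(self.capacity, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False

# Usage: reject the prediction request (and consider raising an alert) when allow() returns False.
```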

Privacy attacks

Data used to train a system can often be recovered by attackers with various techniques, like membership inference, model inversion and training data extraction. It is therefore imperative to remove as much sensitive data as possible when preparing your training data. Of course, there is a balance at play between how much data you need for a good enough model and how much you can cut out to reduce privacy risks.

Differential privacy can prove helpful in mitigating privacy attacks, but typically comes at the cost of being hard to implement. You can also consider switching to model training methods that are known to be more resistant to privacy attacks. Do make sure to double-check whether these methods really are still resistant, since new papers keep coming out that invalidate some of the established methods.
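As a tiny illustration of the idea behind differential privacy, here is the classic Laplace mechanism applied to a count query: noise calibrated to the query's sensitivity and a privacy budget epsilon makes it hard to infer whether any single record was included. Applying differential privacy to model training itself (for example DP-SGD) is considerably more involved; this is only a conceptual sketch.

```python
import numpy as np

def dp_count(values, predicate, epsilon: float = 1.0, seed: int = 0) -> float:
    """Epsilon-differentially private count: a count query has sensitivity 1,
    so Laplace noise with scale 1/epsilon suffices for this single query."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: release a noisy count of records with a sensitive property instead of the exact number.
noisy = dp_count([23, 67, 71, 45, 80], predicate=lambda age: age > 65, epsilon=0.5)
```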

We know we’ve mentioned rate limiting and authentication already, but these can also help against privacy attacks! Most attacks start with unusual traffic to your models.

Code injections

Code injection example

Image by author, injection icon created by Freepik — Flaticon

Carefully crafted inputs to a model can cause external (malicious) code to be triggered. This is a problem that general software applications also face, and it is typically mitigated through input sanitisation. However, in machine learning there is an additional case of code injection that one should pay special attention to: corrupted model artefacts.

There are a lot of model libraries on the internet nowadays, offering pre-trained models people can download and use. This is great for sharing AI capabilities with companies that wouldn't otherwise be able to use AI. On the flip side, it creates a big opportunity for hackers to publish corrupted models that can inject code into the system of the end user. In their whitepaper, NCC Group showcases how easy it is to sneak malicious code into model artefacts by exploiting scikit-learn pickles, plain Python pickle files, the PyTorch PT format, and more.

To mitigate this risk, downloaded models should be treated just like any other downloaded code; inspect the supply chain and scan the models for malware if possible.
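For pickle-based formats specifically, one cheap check is to inspect the pickle's opcodes before loading it: opcodes such as GLOBAL, STACK_GLOBAL and REDUCE are the ones that import and call objects at load time. Note that legitimate scikit-learn and PyTorch pickles also use these opcodes to rebuild objects, so the output needs reviewing against the modules you expect; dedicated scanners or non-executable formats are generally a better long-term answer. The sketch below uses only the Python standard library.

```python
import pickletools

# Opcodes that can trigger imports or calls when the pickle is loaded.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def audit_pickle(path: str) -> list:
    """List the import/call opcodes (and their arguments) in a pickle file,
    without ever unpickling it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append((pos, opcode.name, arg))
    return findings

# Example: review which modules and callables a downloaded artefact would pull in.
# for pos, name, arg in audit_pickle("downloaded_model.pkl"):
#     print(pos, name, arg)
```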

Corrupted packages

By corrupted packages we mean packages and dependencies that contain malicious or compromised code and might trigger unwanted behaviour upon use. Machine learning code typically makes use of several Python packages, which can be corrupted just like any other software. Just a few months ago, for instance, multiple Python libraries were caught leaking AWS credentials.

The mitigation strategy for this is essentially the same as for the code injections mentioned previously. In addition, it's a good idea to keep an eye on news outlets that monitor compromised packages, such as hacker news sites. In the Netherlands, the National Cyber Security Centre and “Cyberveilig Nederland” also have newsletters about such security risks, which helps you stay on top of them. The cybersecurity organisations in your country probably have one as well.

Lastly: always make sure you pin your dependencies before deploying! This way you only have to vet the exact versions you are using, and any later versions that get hijacked will not affect you. When pinning versions it is important to keep an eye on updates though, especially if there are patches for known security issues.
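As a sketch of what pinning can look like with pip, a requirements file can pin both the exact version and the expected hash of every package, so a tampered artefact is rejected at install time. The package names below are examples and the digests are placeholders for the hashes published on PyPI.

```
# requirements.txt, installed with: pip install --require-hashes -r requirements.txt
numpy==1.23.5 \
    --hash=sha256:<digest of the exact wheel or sdist you vetted>
scikit-learn==1.1.3 \
    --hash=sha256:<digest of the exact wheel or sdist you vetted>
```

Note that in --require-hashes mode pip expects hashes for transitive dependencies as well, so tools that generate a fully pinned lock file make this much easier to maintain.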

Corrupted packages example

Image by author

Model monitoring

With the unpredictability of machine learning comes the fact that models can fail unexpectedly once they've made it into production. This has nothing to do with how good a data scientist or ML engineer you are; it's just part of the process. The failure can be caused by many things, like too much data distribution drift, unforeseen edge cases that couldn't be predicted, or any of the security issues mentioned in the previous sections.

Model monitoring example

Image by author

The most important thing here is to have a procedure in place for handling failing models when this occurs. This means being able both to detect failed models quickly and to patch them or take them down quickly. How this should look differs from organisation to organisation, but I can give you a few pointers that we use internally at UbiOps (a small drift-check sketch follows the list):

  • It should be possible to get a notification when a model fails.
  • Deploying a new version should be quick. If your deployment process is heavyweight, it will be difficult to patch a model after an issue is identified.
  • It should be easy to keep an overview of how all your models are doing in production. Monitor things like data traffic, CPU usage (a spike might indicate a code injection) and data drift.
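For the data drift point, a lightweight starting place is a two-sample test comparing a feature's training distribution with what the model receives in production. The sketch below uses the Kolmogorov-Smirnov test from SciPy; the significance threshold is an arbitrary placeholder, and the alerting hook notify_on_call_team is hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values: np.ndarray, live_values: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flag drift when the live distribution
    differs from the training distribution more than chance would explain."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

# Example: alert when the live "age" feature no longer looks like the training data.
# if drifted(train_df["age"].to_numpy(), recent_requests["age"].to_numpy()):
#     notify_on_call_team("Possible data drift on feature 'age'")
```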

Conclusion

At every step of the model development life cycle new vulnerabilities can be introduced. While the number of cybersecurity threats faced by ML systems is rapidly growing, the number of possible mitigation strategies is catching up. Preventing hackers from attacking your ML systems will always be a cat and mouse game, but here are some pointers to take home:

  • If you open up your model to the internet via an API or web service, consider adding rate limits and authentication. While it might seem trivial, these measures make it significantly more difficult for hackers to perform most of the attacks we mentioned.
  • Keep an eye on hacker and cybersecurity newsletters so you can stay on top of known threats.
  • Carefully inspect packages and pre-trained models you take from the internet.
  • Double-check the security of any third-party tools you would like to use! Look for things like ISO certifications and frequent updates.

If security is essential to your business and you're wondering whether your MLOps practices are secure enough, don't hesitate to reach out to James or myself! We both love talking about this topic and would be happy to think along with you.

References

C. Anley, Practical attacks on machine learning systems (2022), whitepaper by NCC Group

K. Hao, Hackers trick a Tesla into veering into the wrong lane (2019), MIT Technology Review

O. Schwartz, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation (2016), IEEE Spectrum

J.P. Mello Jr., When machine learning is hacked: 4 lessons from Cylance (2022), TechBeacon

R. Shokri, M. Stronati, and V. Shmatikov. Membership Inference Attacks against Machine Learning Models (2017). In IEEE Symposium on Security and Privacy (S&P).

S. Yeom, I. Giacomelli, M. Fredrikson et al., Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting (2018), in IEEE 31st Computer Security Foundations Symposium (CSF)

N. Carlini, F. Tramer, E. Wallace, M. Jagielski et al., Extracting training data from large language models (2021), in 30th USENIX Security Symposium

I. Goodfellow, J. Shlens, C. Szegedy, Explaining and Harnessing Adversarial Examples (2014), arXiv preprint

N. Narodytska, S. Kasiviswanathan, Simple Black-Box Adversarial Attacks on Deep Neural Networks (2017), CVPR Workshops. Vol. 2.

A. Madry, A. Makelov, L. Schmidt, et al. Towards Deep Learning Models Resistant to Adversarial Attacks (2017), arXiv preprint

L. Song, R. Shokri, P. Mittal, Privacy Risks of Securing Machine Learning Models against Adversarial Examples (2019), Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security

Y. Zhu, Y. Cheng, H. Zhou et al., Hermes Attack: Steal DNN Models with Lossless Inference Accuracy (2020), In 30th USENIX Security Symposium (USENIX Security 21)

Y. Zhang, R. Jia, H. Pei et al., The secret revealer: generative model-inversion attacks against deep neural networks (2020), in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition

F. Tramer, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, Stealing Machine Learning Models via Prediction APIs (2016), In USENIX Security Symposium

D. Goodman, X. Hao, Y. Wang, et al., Advbox: a toolbox to generate adversarial examples that fool neural networks (2020), arXiv

S. Truex, L. Liu, M. Gursoy et al., Effects of differential privacy and data skewness on membership inference vulnerability (2019). In 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications

R. Lakshmanan, Multiple backdoored Python libraries caught stealing AWS secrets and keys (2022), The Hacker News
