
Deploying Mistral 7B to UbiOps


This tutorial will help you create a cloud-based inference API endpoint for the Mistral-7B-Instruct-v0.2 model, using UbiOps. The version we will be using is already pretrained and will be loaded from Hugging Face. The model was developed by Mistral AI.

Mistral 7B is a language model engineered for superior performance and efficiency. Mistral AI claims that Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks. The model deployed in this tutorial is a fine-tuned (instruction-tuned) version of the base Mistral 7B model.

On this page we will walk you through:

  1. Connecting with the UbiOps API client
  2. Creating a code environment for our deployment
  3. Creating a deployment for the Mistral 7B model
  4. Calling the Mistral 7B deployment API endpoint

Mistral-7B is a text-to-text model. Therefore we will make a deployment that takes a text prompt as input and returns a response:

|               | Variable name | Data type |
|---------------|---------------|-----------|
| Input field   | prompt        | string    |
| Output field  | response      | string    |

Note that we deploy to a GPU instance by default, which is not accessible in every project. You can contact us about this. Let's get started!

1. Connecting with the UbiOps API client

To use the UbiOps API from a notebook, we need to install the UbiOps Python client library.

!pip install --upgrade ubiops

To set up a connection with the UbiOps platform API we need the name of your UbiOps project and an API token with project-editor permissions.

Once you have your project name and API token, paste them below in the following cell before running.

import ubiops
from datetime import datetime

API_TOKEN = "<API TOKEN>"  # Make sure this is in the format "Token token-code"
PROJECT_NAME = "<PROJECT_NAME>"  # Fill in your project name here

DEPLOYMENT_NAME = f"mistral-7b-{datetime.now().strftime('%y%m%d%H%M%S')}"  # timestamp makes the name unique

# Initialize client library
configuration = ubiops.Configuration(host="https://api.ubiops.com/v2.1")
configuration.api_key["Authorization"] = API_TOKEN

# Establish a connection
client = ubiops.ApiClient(configuration)
api = ubiops.CoreApi(client)
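Since a malformed token is a common source of authentication errors, a quick local sanity check can catch it before any API call is made (a minimal sketch; the helper name is ours, not part of the UbiOps client):

```python
def check_token_format(token: str) -> bool:
    # UbiOps expects the token as the literal string "Token " followed by the token code
    return token.startswith("Token ")

assert check_token_format("Token abc123")   # correct format
assert not check_token_format("abc123")     # missing the "Token " prefix
```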

2. Setting up the environment

Our environment code contains instructions to install dependencies.

environment_dir = "environment_package"
ENVIRONMENT_NAME = "mistral-7b-environment"
%mkdir {environment_dir}

We first write a requirements.txt file, which contains the Python packages that we will use in our deployment code.

%%writefile {environment_dir}/requirements.txt
# This file contains package requirements for the environment
# installed via PIP. These match the imports in the deployment code below.
torch
transformers
bitsandbytes
accelerate
huggingface_hub
ubiops

Next we add a ubiops.yaml to set a remote pip index. This ensures that we install a CUDA-compatible version of PyTorch. CUDA allows the model to be loaded on, and run on, a GPU.

%%writefile {environment_dir}/ubiops.yaml
environment_variables:
- PIP_EXTRA_INDEX_URL=https://download.pytorch.org/whl/cu118  # adjust the CUDA version to match the base environment

Now we create a UbiOps environment. We select Python 3.9 with CUDA pre-installed (python3-9-cuda) as the base environment since we want to run on a GPU. To run on a CPU instead, use python3-9.

Our additional dependencies are installed on top of this base environment, to create our new custom_environment called mistral-7b-environment.

api_response = api.environments_create(
    project_name=PROJECT_NAME,
    data=ubiops.EnvironmentCreate(
        name=ENVIRONMENT_NAME,
        display_name=ENVIRONMENT_NAME,
        base_environment="python3-9-cuda",  # use python3-9 when running on CPU
        description="Environment to run Mistral 7B from Huggingface",
    ),
)

Package and upload the environment files.

import shutil

training_environment_archive = shutil.make_archive(
    environment_dir, "zip", ".", environment_dir
)

# Upload the zipped environment package to UbiOps
api.environment_revisions_file_upload(
    project_name=PROJECT_NAME,
    environment_name=ENVIRONMENT_NAME,
    file=training_environment_archive,
)

3. Creating a deployment for the Mistral 7B model

Now that we have created our code environment in UbiOps, it is time to write the actual code to run the Mistral-7B-Instruct-v0.2 model and push it to UbiOps.

As you can see, we're uploading a file with a Deployment class and two methods:

  - __init__ will run when the deployment starts up and can be used to load models, data, artifacts and other requirements for inference.
  - request() will run every time a call is made to the model REST API endpoint and includes all the logic for processing data.

Separating the logic between the two methods ensures fast model response times. We load the model from Huggingface in the __init__ method, and put the code that needs to run when a call is made to the deployment in the request() method. This way the model only needs to be loaded when the deployment starts up.
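In outline, that structure looks as follows (a minimal local sketch with placeholder logic standing in for the real model, just to illustrate the startup/request split):

```python
class Deployment:
    def __init__(self, base_directory, context):
        # Runs once, at deployment startup: load heavy artifacts here
        self.model = lambda text: text.upper()  # placeholder for a real model

    def request(self, data):
        # Runs on every API call: only lightweight inference logic here
        return {"response": self.model(data["prompt"])}

d = Deployment(".", {})
print(d.request({"prompt": "hello"}))  # → {'response': 'HELLO'}
```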

deployment_code_dir = "deployment_code"
!mkdir {deployment_code_dir}

%%writefile {deployment_code_dir}/deployment.py
"""
The file containing the deployment code needs to be called 'deployment.py' and should contain a 'Deployment'
class and a 'request' method.
"""

import os
import ubiops
import torch
import shutil
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, BitsAndBytesConfig

class Deployment:

    def __init__(self, base_directory, context):
        """
        Initialisation method for the deployment. Any code inside this method will execute when the deployment starts up.
        It can for example be used for loading modules that have to be stored in memory or setting up connections.
        """

        print("Initialising deployment")

        model_id = os.environ["model_id"]

        gpu_available = torch.cuda.is_available()
        print("Loading device")
        self.device = torch.device("cuda") if gpu_available else torch.device("cpu")
        print("Device loaded in")

        # Quantization config: load the model in 4-bit so it fits in GPU memory
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )

        print("Downloading model")
        self.model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=bnb_config,
            device_map="auto",
        )

        print("Downloading tokenizer")
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)

    def request(self, data):
        """
        Method for deployment requests, called separately for each individual request.
        """

        prompt = data["prompt"]

        model_inputs = self.tokenizer([prompt], return_tensors="pt").to(self.device)

        # Here we set the GenerationConfig to parameterize the generate method
        generation_config = GenerationConfig(
            temperature=1.0,
            do_sample=True,
            max_new_tokens=256,  # maximum length of the generated response
        )

        print("Generating output")
        generated_ids = self.model.generate(
            **model_inputs,
            generation_config=generation_config,
        )
        response = self.tokenizer.batch_decode(generated_ids)[0]

        # Here we return our output parameters as a JSON-serializable dictionary
        return {"response": response}
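A note on prompting: the Instruct variants of Mistral 7B are trained on the [INST] chat format, and generate() returns the prompt tokens together with the completion, so the decoded response above will echo the input. If you want to handle this inside request(), helpers like the following sketch the idea (these are hypothetical additions, not part of the deployment code above):

```python
def format_mistral_prompt(user_message: str) -> str:
    # Mistral-7B-Instruct expects user turns wrapped in [INST] ... [/INST]
    return f"<s>[INST] {user_message} [/INST]"

def extract_completion(decoded: str) -> str:
    # generate() echoes the prompt; keep only the text after the last [/INST]
    return decoded.split("[/INST]")[-1].replace("</s>", "").strip()

print(format_mistral_prompt("Tell me a joke"))        # → <s>[INST] Tell me a joke [/INST]
print(extract_completion("<s>[INST] Hi [/INST] Hello!</s>"))  # → Hello!
```

The tokenizer's apply_chat_template method produces the same format directly from a list of chat messages, which is the more robust option when available.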

Create a UbiOps deployment

Create a deployment. Here we define the inputs and outputs of a model. We can create different deployment versions under this deployment.

# Create the deployment
deployment_template = ubiops.DeploymentCreate(
    name=DEPLOYMENT_NAME,
    input_type="structured",
    output_type="structured",
    input_fields=[{"name": "prompt", "data_type": "string"}],
    output_fields=[{"name": "response", "data_type": "string"}],
)

api.deployments_create(project_name=PROJECT_NAME, data=deployment_template)

Create a deployment version

Now we will create a version of the deployment. For the version we need to define the name, the environment, the type of instance (CPU or GPU), as well as the size of the instance.

# Create the version
version_template = ubiops.DeploymentVersionCreate(
    version="v1",
    environment=ENVIRONMENT_NAME,
    instance_type="16384mb_t4",  # GPU instance; available instance type names depend on your project
    minimum_instances=0,
    maximum_instances=1,
    maximum_idle_time=600,  # = 10 minutes
)

api.deployment_versions_create(
    project_name=PROJECT_NAME, deployment_name=DEPLOYMENT_NAME, data=version_template
)

Package and upload the code

# And now we zip our code (deployment package) and push it to the version

import shutil

deployment_code_archive = shutil.make_archive(
    deployment_code_dir, "zip", deployment_code_dir
)

upload_response = api.revisions_file_upload(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    version="v1",
    file=deployment_code_archive,
)

# Check if the deployment is finished building. This can take a few minutes

ubiops.utils.wait_for_environment(
    client,
    project_name=PROJECT_NAME,
    environment_name=ENVIRONMENT_NAME,
    stream_logs=True,
)

We can only send requests to our deployment version after our environment has finished building.

NOTE: Building the environment might take a while, as we need to download and install all the packages and dependencies. We only need to build the environment once: the next time we spin up an instance of our deployment, we won't need to install the dependencies again. Toggle off stream_logs if you do not want to stream the logs of the build process.

Create an environment variable

Here we create an environment variable for the model_id, which is used to specify which model will be downloaded from Huggingface. If you want to use another version of Mistral you can replace the value of MODEL_ID in the cell below, with the model_id of the model that you would like to use.

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # You can change this parameter if you want to use a different model from Huggingface.

api_response = api.deployment_version_environment_variables_create(
    project_name=PROJECT_NAME,
    deployment_name=DEPLOYMENT_NAME,
    version="v1",
    data=ubiops.EnvironmentVariableCreate(
        name="model_id", value=MODEL_ID, secret=False
    ),
)

4. Calling the Mistral 7B deployment API endpoint

Our deployment is now ready to be requested! We can send requests to it via the deployment-requests-create or the batch-deployment-requests-create API endpoint. During this step a node will be spun up and the model will be downloaded from Huggingface, which is why this first request can take a while. You can monitor the progress in the logs. Subsequent requests to the deployment will be handled faster.

data = {
    "prompt": "Tell me a joke",
}

api.deployment_requests_create(
    project_name=PROJECT_NAME, deployment_name=DEPLOYMENT_NAME, data=data, timeout=3600
)

So that's it! You now have your own on-demand, scalable Mistral-7B-Instruct-v0.2 model running in the cloud, with a REST API that you can reach from anywhere!