How to deploy YOLOv4 on UbiOps

Image recognition is a popular and important field within machine learning. Many models exist that can, for example, classify objects within images. In this tutorial, we will take a look at one of these image recognition models, YOLOv4 (You Only Look Once), and install it locally. Take a look at the paper and the website from the author. Having this model installed on your local machine, however, is not actually that useful on its own, except perhaps for some experimentation.

These models only show real value when you can deploy them in a robust and scalable production environment and then implement them in, for example, your web app. That's why we are also going to deploy this model to UbiOps. If you want to know more about what is important in an AI production environment overall, you should read the ten commandments of MLOps.

In this tutorial, you will:

  • Download a ready-to-go version of the YOLOv4 model
  • Install it locally
  • Adapt it for usage with UbiOps
  • Deploy YOLOv4 on UbiOps


Requirements

  • Basic Python knowledge
  • A working Python 3.7 installation
  • A free account on UbiOps

Getting the model

For this tutorial, we will use a pre-trained version of the YOLOv4 model from the internet. Pre-trained models are great because you can use them right away or as a starting point for your own specialized models. This saves a lot of time.

You can get the model from GitHub.

You can download and extract the zip file or use git from the command line:

git clone

Then you have to download the weight files and add them to the directory of the model. If you have extracted the model to a directory called yolov4model, for example, then you have to place the weights file at yolov4model/data/yolov4.weights.

Setting up the model

After this go into the yolov4model directory and install all required dependencies. It is good practice to create a virtual environment for this. If you want to learn more about virtual environments in python read this documentation page. But for now you can simply run these commands.

From within the yolov4model directory:

python3 -m venv .  # set up a virtual environment, assuming Python 3.7 is installed
source bin/activate  # activate the virtual environment
pip install -r requirements.txt  # install the requirements
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4  # set up the weights

Your Yolo model should now be ready to go. To test it:

python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg

After this you should see something like the following image:


If so, great work! You have just successfully installed YOLOv4 on your local machine.

Getting the model ready for UbiOps

Alternatively, you can download a version of the model that has already been prepared for UbiOps.

Now that we have a working model, we need to adapt it a bit so that UbiOps can use it. At the moment, we have a Python file with the functionality to classify images, like we just did. UbiOps, however, expects a Python file with a specific structure. You can find an in-depth explanation on our model structure documentation page.

For now, it is important to know that the file needs to have a "model" class. This class needs to have two methods. The first is the init method, which contains everything that needs to be done before the model can start processing requests. The second is the request method, which contains the actual logic to process a request. UbiOps can also use a requirements.txt to automatically install dependencies; the original project already had one, so we are good to go.
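To make that structure concrete, here is a minimal sketch of what such a file could look like. This is not the actual rewritten model, just an illustration of the required class layout; the field names image_input and image_output match the ones we will define in the UbiOps UI later.

```python
class Model:
    def __init__(self):
        # Runs once, when the model starts up: load the weights, build the
        # network, and do any other one-time setup here.
        self.model_name = "yolov4"

    def request(self, data):
        # Runs for every request. `data` is a dictionary keyed by the
        # input field names defined in UbiOps.
        input_path = data["image_input"]
        output_path = "output.jpg"
        # ... run the detector on input_path and write the result
        #     to output_path ...
        return {"image_output": output_path}
```

The returned dictionary is keyed by the output field names, which is how UbiOps knows which result belongs to which output.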

Take some time to compare the original file and the one that I rewrote; you will see that they are actually quite similar. The rewriting took less than half an hour. Basically, I put all the classification logic into the request method of the model class and removed all the command line argument logic.

I then fed the image into the classifier using this line:

original_image = cv2.imread(data["image_input"])

And returned the output image using these lines:

cv2.imwrite('output.jpg', image)
return {'image_output': 'output.jpg'}

This is because, in UbiOps, input data comes in through a dictionary called data, and output data is passed back to UbiOps as a dictionary again via the return statement of the request method.

I also had to change one library. Because UbiOps does not provide a desktop environment, and the OpenCV library actually expects one, I changed opencv-python to opencv-python-headless. The headless version requires no desktop environment. See the requirements.txt for this change. Related to this, I also removed the call that displays the image in a window, which requires a desktop environment too. The last thing I did was add a native OS dependency to the ubiops.yaml (you can read more about this file in the UbiOps documentation):

- libglib2.0-0
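Put together, a minimal ubiops.yaml for this single dependency could look like the snippet below; the apt/packages layout is the one described in the UbiOps documentation on environment files.

```yaml
apt:
  packages:
    - libglib2.0-0
```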

In order for UbiOps to accept your files, they need to be in a directory called model_package, so rename yolov4model to model_package. Now simply zip the whole model_package directory. Your zip should look something like this. The important thing is to have the model file inside the model_package directory.

Deploy the model to UbiOps

Now that we have prepared our model, it is time to upload it to UbiOps using the UI. (If you are more of a terminal tiger, take a look at our CLI and client libraries.)

  1. Go to UbiOps and log in.
  2. In the sidebar on the left, go to Models -> Create.
  3. Set the Name for the model, for example yolov4, and a description (optional).
  4. Set Input type to structured and add an input field image_input with data type file.
  5. Set Output type to structured and add an output field image_output with data type file.

Input and Output in UbiOps

  6. Click Next Step and then Confirm.
  7. Set the Language to Python 3.7.
  8. You can leave the rest of the settings on their defaults.
  9. Upload the zipped model package and click on the Create button.

The model should be active after a few minutes and can be seen on the Models overview page.

You can test your newly created YOLOv4 model by going to the model version page.

  1. Click on Models in the sidebar on the left, then click on your model name (yolov4), and then on one of the versions (v1). You should now be on the model version page.

model version page in UbiOps

2. Now click CREATE DIRECT REQUEST and upload an image to UbiOps. Click Create to create the request.

3. When the model has finished processing, you can click the notification named Results in the bottom right to see the results of your request.


We now have YOLOv4 running in UbiOps, ready to process requests. It was pretty easy, right? You should read the UbiOps model quick start if you want to learn more about deploying models on UbiOps and start running all of your models in a professional production environment. If you want another challenge, there is also a v5 of the YOLO model. However, there is some controversy around it, so judge it for yourself.
