
Comparing ONNX model performance of GPU vs CPU

This tutorial compares the performance of an ONNX model on GPU and on CPU.

Download link for necessary files: Notebook.
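At its core, the comparison comes down to which ONNX Runtime execution provider serves the model. As a rough local sketch of what is being measured (the model path, input name, and input shape below are assumptions for illustration, and the `onnxruntime-gpu` package is assumed to be installed; the actual deployment code lives in the notebook):

```python
import time
import numpy as np
import onnxruntime as ort

# NOTE: model path and input shape are placeholders for illustration.
MODEL_PATH = "model.onnx"
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

def benchmark(providers, runs=50):
    # Create a session pinned to the given execution provider(s)
    session = ort.InferenceSession(MODEL_PATH, providers=providers)
    input_name = session.get_inputs()[0].name
    # Warm-up run so one-time initialization is not counted
    session.run(None, {input_name: dummy_input})
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: dummy_input})
    return (time.perf_counter() - start) / runs

cpu_time = benchmark(["CPUExecutionProvider"])
gpu_time = benchmark(["CUDAExecutionProvider", "CPUExecutionProvider"])
print(f"CPU: {cpu_time * 1000:.1f} ms/inference, GPU: {gpu_time * 1000:.1f} ms/inference")
```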

How does it work?

Step 1: Log in to your UbiOps account at https://app.ubiops.com/ and create an API token with project editor rights. To do so, click on Permissions in the navigation panel, and then click on API tokens. Click on Add token to create a new token.

Creating an API token

Give your new token a name, save the token in a safe place and assign the following role to the token: project editor. This role can be assigned on the project level.

Step 2: Download the Notebook folder and open onnx-cpu-gpu.ipynb. In the notebook you will find a place to enter your API token and the name of your project in UbiOps. Paste the token you saved in the indicated spot and enter your project name, which can be found at the top of your screen in the WebApp.
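The notebook uses these two values to connect to your project. A minimal sketch of what that connection looks like with the UbiOps Python client (the exact cell in the notebook may differ; the token and project name below are placeholders):

```python
import ubiops

API_TOKEN = "Token <YOUR_API_TOKEN>"   # placeholder: paste the token you saved
PROJECT_NAME = "<YOUR_PROJECT_NAME>"   # placeholder: project name from the WebApp

# Configure the client with the token and verify the connection
configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = API_TOKEN
api = ubiops.CoreApi(ubiops.ApiClient(configuration))
print(api.service_status())
```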

Step 3: Run the Jupyter notebook onnx-cpu-gpu.ipynb and everything will be deployed automatically to your UbiOps environment! Afterwards you can explore the code in the notebook or the application in the WebApp.
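Once the notebook has finished, you can also query the deployed versions yourself to see the timing difference. A hedged sketch, reusing the api client and PROJECT_NAME from the snippet above (the deployment name, version names, and input payload are assumptions; check the notebook or the WebApp for the actual names used):

```python
import time

# NOTE: deployment/version names and input payload are placeholders for illustration.
DEPLOYMENT_NAME = "onnx-deployment"
DATA = {"input": "<your input here>"}

for version in ["cpu", "gpu"]:
    start = time.perf_counter()
    result = api.deployment_version_requests_create(
        project_name=PROJECT_NAME,
        deployment_name=DEPLOYMENT_NAME,
        version=version,
        data=DATA,
    )
    elapsed = time.perf_counter() - start
    print(f"{version}: {elapsed:.2f} s round-trip")
    print(result)
```

Note that this measures the full request round-trip, including network and platform overhead, rather than pure inference time; the notebook contains the actual comparison used in this tutorial.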