Get started with UbiOps

We have put together several tutorials to get you up to speed. Happy deploying!

Introduction to UbiOps

Intuitive UI, key concepts, and more. The best way to learn about UbiOps is to start using it, and the tutorials below will help you do exactly that.

Quickstart: deploy your first model with UbiOps

UbiOps enables you to quickly turn algorithms into scalable, robust, and secure end-to-end applications, without requiring knowledge of cloud infrastructure, microservices, automated scaling, or DevOps practices.
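At the heart of every UbiOps deployment is a deployment.py file that exposes a Deployment class. A minimal sketch is shown below; the field names "input" and "output" are placeholders for whatever input and output fields you define for your deployment in UbiOps:

```python
class Deployment:
    """Minimal sketch of a UbiOps deployment class."""

    def __init__(self, base_directory, context):
        # Called once when the deployment instance starts up.
        # Load models, files, or other resources here.
        pass

    def request(self, data):
        # Called for every request. `data` is a dict keyed by the
        # input fields you defined; return a dict keyed by the
        # output fields.
        return {"output": data["input"]}
```

Because this is plain Python, you can instantiate the class locally and call request directly to test your logic before uploading the package.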

Quickstart: create your first pipeline with UbiOps

Pipelines allow users to define sequences of Deployments, by connecting the output of one deployment to the input of another deployment. This is useful when your application depends on a series of separate data transformations that need to operate in sequence.
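You can mimic this wiring locally with plain Python: each step below is a stand-in deployment whose request method takes and returns a dict, and the pipeline function passes the output of one step as the input of the next. The class and field names are illustrative, not part of the UbiOps API:

```python
class CleanDeployment:
    def request(self, data):
        # First step: normalise the raw text.
        return {"text": data["text"].strip().lower()}


class CountDeployment:
    def request(self, data):
        # Second step: count words in the cleaned text.
        return {"word_count": len(data["text"].split())}


def run_pipeline(data):
    # Connect the output of one step to the input of the next,
    # just as a UbiOps pipeline does between deployments.
    cleaned = CleanDeployment().request(data)
    return CountDeployment().request(cleaned)
```

In UbiOps itself this wiring is configured in the pipeline editor rather than in code, but the data flow between deployments follows the same pattern.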

Example Deployments

Example deployments contain ready-made deployment packages for typical use cases, which illustrate how to deploy your Python or R code in UbiOps. Besides the examples covered in the videos below, you can find more inspiration in our cookbook:

Example deployment: deploying a handwriting recognition model

Image recognition apps are straightforward to deploy on UbiOps, and this deployment package shows an example: a model that predicts handwritten digits. It takes a picture of a handwritten digit as input and returns its prediction of which digit it is.

Example deployment: deploying a simple number multiplication function

To illustrate the basic structure of the deployment package required by UbiOps, we have created a sample deployment that multiplies a given number by 2. You can download the deployment package as a zip file (ready to be used) and follow the tutorial.
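The request method of that sample deployment boils down to a single line. A sketch, assuming an input field named "number" and an output field named "number_multiplied" (the actual field names are defined when you create the deployment):

```python
class Deployment:
    def __init__(self, base_directory, context):
        # Nothing to load for this simple example.
        pass

    def request(self, data):
        # Multiply the input field by 2 and return it as the
        # output field of the deployment.
        return {"number_multiplied": data["number"] * 2}
```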

Example deployment: loading and running a pre-trained prediction model

Prediction models are a typical data science application, and they are very straightforward to deploy on UbiOps. In the request method, we call the model's predict method to actually make the prediction. This structure works for models created with TensorFlow, scikit-learn, or other standard data science libraries.
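The pattern is to load the trained model once in __init__ and call it for every request. The sketch below uses a stand-in MeanModel so it is self-contained; in practice you would load a pickled scikit-learn or saved TensorFlow model instead, and the "features"/"prediction" field names are assumptions:

```python
class MeanModel:
    """Stand-in for a pre-trained model with a predict method."""

    def predict(self, rows):
        # Predict the mean of each feature row.
        return [sum(row) / len(row) for row in rows]


class Deployment:
    def __init__(self, base_directory, context):
        # Load the pre-trained model once at startup, e.g. by
        # unpickling a model file shipped in the deployment package:
        #   with open(os.path.join(base_directory, "model.pkl"), "rb") as f:
        #       self.model = pickle.load(f)
        self.model = MeanModel()

    def request(self, data):
        # Call the model's predict method on the request data.
        prediction = self.model.predict([data["features"]])
        return {"prediction": prediction[0]}
```

Loading the model in __init__ rather than in request means the (often slow) deserialisation happens once per instance instead of once per request.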