
Introduction to UbiOps


UbiOps is developed for data scientists and teams who are looking for an easy and production-ready way to:

  • Deploy, train, and run your own ML and data science code
  • Deploy off-the-shelf LLM & GenAI models
  • Run helper functions and other data processing tasks

UbiOps takes care of containerizing your code, deploying it as a microservice with its own API endpoint, handling requests, and scaling automatically. There are also advanced features for creating data pipelines, version management, job scheduling, monitoring, security and governance.

Starter Tutorials

To get started right away, create an account and check out our starter tutorial, or follow the tour in the WebApp.

Model serving

  • Deployments
    Deployments are the entities in UbiOps that serve your models, functions and scripts for data processing. They can receive requests through their API endpoint to process data. Deployments run in containers tailored to the needs of your code. UbiOps offers runtimes for several Python versions. Deployments also have versions which you can use to keep track of model updates. Read more->

  • Pipelines (workflows)
    Pipelines are workflows that you build from deployments and other objects. We provide operators, predefined pipeline objects that let you quickly add extra logic to your pipelines. Like deployments, pipelines can receive requests through their API endpoint, and UbiOps manages the data flow between the objects. Read more->

  • Requests
    A request can be sent to both deployments and pipelines. It triggers a single run of a deployment or pipeline using its data payload. You can send requests to the API endpoint of a deployment or pipeline (see the sketch below), or schedule their execution. Read more->
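
To make this concrete, here is a minimal sketch of sending a request to a deployment with the UbiOps Python client library. The token, project name, deployment name, and the 'input' field are placeholders; the field names in `data` must match however your deployment's input is defined.

```python
import ubiops

# Authenticate with an API token from a service user (placeholder value).
configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = "Token <YOUR_API_TOKEN>"

api_client = ubiops.ApiClient(configuration)
core_api = ubiops.CoreApi(api_client)

# Send a request to a deployment's API endpoint. The keys in `data` must
# match the input fields defined for the deployment ('input' is assumed here).
response = core_api.deployment_requests_create(
    project_name="my-project",
    deployment_name="my-deployment",
    data={"input": "some value"},
)
print(response.result)

# Pipeline requests work the same way, through the pipeline's own endpoint:
# core_api.pipeline_requests_create(
#     project_name="my-project",
#     pipeline_name="my-pipeline",
#     data={"input": "some value"},
# )

api_client.close()
```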

Model Training

  • Experiments
    An experiment defines the training set-up that you will use. This includes the environment it should run in, what it should be called, and the hardware that you would like to use for your training runs. You can also configure where you want to store your model artifact and any other files that are generated during a training run. You can add a training experiment as an object to a pipeline.

  • Training runs
    Training runs are the actual code executions of a training job and are tied to an experiment. You can run multiple training runs (in parallel) inside an experiment, which allows you to quickly try out different training code and parameters (see the sketch below). You can also easily compare the results of training runs across experiments in the WebApp. Note that all runs inside an experiment use the same environment and instance type (group).
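
As an illustration, here is a hedged sketch of creating an experiment and starting a run with the training add-on of the UbiOps Python client. The project name, experiment name, instance type, environment, bucket and parameters are all placeholder assumptions, and exact field names may differ between client versions.

```python
import ubiops
from ubiops.training.training import Training

configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = "Token <YOUR_API_TOKEN>"
api_client = ubiops.ApiClient(configuration)

# The Training class wraps the training-specific API calls (assumed interface).
training = Training(api_client)

# An experiment fixes the environment, hardware and output bucket
# that all of its runs will share (placeholder values throughout).
training.experiments_create(
    project_name="my-project",
    data=ubiops.ExperimentCreate(
        name="my-experiment",
        description="Hyperparameter search",
        environment="python3-11",
        instance_type="4096mb",   # or an instance type group, depending on your setup
        default_bucket="default",
    ),
)

# A run executes a training script inside the experiment; multiple runs
# with different parameters can be started in parallel.
training.experiment_runs_create(
    project_name="my-project",
    experiment_name="my-experiment",
    data=ubiops.ExperimentRunCreate(
        name="run-lr-0-01",
        description="Learning rate 0.01",
        training_code="train.py",   # path to a local training script (placeholder)
        parameters={"learning_rate": 0.01, "epochs": 5},
    ),
)
```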

Important concepts

  • Environments
    An environment defines the container image for your deployment or training job. It specifies which libraries, packages and dependencies are installed in the container. Once built, environments can be re-used across deployments and training jobs in your project. Read more->

  • Instance types
    Instances are the compute nodes on which your deployment or training job runs. Instance types determine the memory, vCPU and storage allocation for your deployment, and whether or not your deployment or training job can make use of a GPU. The more memory you assign, the more CPU cores are available for your deployment version or training experiment. You can also configure the scaling and availability needs for your deployment. Read more->

  • Instance type groups
    You can also create instance type groups of one or more instance types. Instance type groups specify which instance types a deployment or experiment can run on. For example, if you have on-premise GPUs that you want to use first, but want to scale out to cloud-based GPUs when they are all occupied, you can specify that in an instance type group (see the sketch below).
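
To show how these concepts fit together, below is a hedged sketch that creates a custom environment and a deployment version that uses it, via the UbiOps Python client. The names, base environment, instance type group and scaling settings are placeholder assumptions, and field names such as instance_type_group_name may differ between client versions.

```python
import ubiops

configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = "Token <YOUR_API_TOKEN>"
api_client = ubiops.ApiClient(configuration)
core_api = ubiops.CoreApi(api_client)

# Create a reusable environment based on a Python base image (placeholder names).
core_api.environments_create(
    project_name="my-project",
    data=ubiops.EnvironmentCreate(
        name="my-custom-environment",
        base_environment="python3-11",
        description="Python 3.11 with project-specific dependencies",
    ),
)

# Create a deployment version that runs in that environment on a chosen
# instance type group, with simple scaling settings (assumed field names).
core_api.deployment_versions_create(
    project_name="my-project",
    deployment_name="my-deployment",
    data=ubiops.DeploymentVersionCreate(
        version="v1",
        environment="my-custom-environment",
        instance_type_group_name="gpu-on-prem-first",
        minimum_instances=0,
        maximum_instances=2,
    ),
)

api_client.close()
```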

Getting started

The best way to learn about UbiOps is to start using it. We have several resources to get you started.

After you have completed the starter tutorials, you can have a look at the UbiOps tutorials, which provide inspiration for working with UbiOps through ready-to-go deployments and notebooks.

We also provide How-To's: concise articles that cover fundamental platform operations and integrations, and provide guidance on more advanced tasks. Each article includes practical, ready-to-use code snippets that you can apply to your own situation.

Ways of using UbiOps

There are multiple ways to interact with the UbiOps platform and API:

  • WebApp
    The UbiOps WebApp at https://app.ubiops.com is an easy way to use UbiOps from your browser.

  • API
    You can also use our platform API directly. To generate an API Token to authenticate with the UbiOps API and your models, create a service user in your UbiOps account.

  • Python client library
    Our Python client library provides an easy way to integrate UbiOps API methods into your code and automate your workflows.

  • Command Line Interface
    With our Command Line Interface you can interact with the UbiOps platform API from your terminal.

All of these options offer access to the same functionality. You could, for example, create a new project using the Command Line Interface and edit it afterwards using the WebApp.
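
For instance, here is a minimal sketch of authenticating with a service user token via the Python client library and checking that the API is reachable; the token is a placeholder.

```python
import ubiops

# Authenticate with the API token of a service user (placeholder value).
configuration = ubiops.Configuration()
configuration.api_key["Authorization"] = "Token <YOUR_API_TOKEN>"

api_client = ubiops.ApiClient(configuration)
core_api = ubiops.CoreApi(api_client)

# A simple connectivity check against the UbiOps API.
print(core_api.service_status())

api_client.close()
```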