Frequently Asked Questions
Questions about UbiOps? Here are the frequently asked questions.
UbiOps is built for deployment, serving and management of your data science code. Although it can be done, UbiOps is not meant for training models. UbiOps provides monitoring on an infrastructure level (CPU, memory, etc.) but not on a model level (precision, recall, F1, etc.).
UbiOps can run any Python or R code, so you can use it for much more than data science and ML/AI functions alone.
In our development, we always keep compatibility in mind. For UbiOps SaaS, users will get notified about upcoming releases at least 2 weeks in advance. This includes information about the maintenance window and compatibility. For UbiOps Enterprise and On-Premise customers, we will release and offer support depending on the SLA.
We built UbiOps such that you don’t have to worry about Kubernetes complexity. However, if you want, there are many knobs to tune for how your deployments run and scale. UbiOps will build a Docker container from your Python/R code and the dependencies and artifacts inside the deployment package. This container will be scheduled as a Kubernetes pod according to the scaling settings [https://ubiops.com/docs/deployments/advanced-parameters/], so it gets assigned the right amount of memory and has the right availability. Scaling up the number of instances depends on the number of incoming requests and the existing workload.
In a pipeline, UbiOps takes care of routing the data between different deployments so you can run and scale more complex applications consisting of different steps. You can split your logic in multiple deployments and connect them. The deployments in a pipeline will scale individually, which is a great way to efficiently run your application. Pipelines can have versions too and get their own individual endpoint for requests. Check the technical docs.
UbiOps uses subscription-based pricing. Depending on your usage of UbiOps and the SLA you require, your monthly price will differ. You can purchase a package containing a fixed amount of compute and resources per month, pay for additional usage, or upgrade to a larger package. This differs between SaaS and on-prem solutions, as with SaaS your code runs on our GCP cluster and with On-Prem on your own infrastructure. More info: https://ubiops.com/pricing-and-plans/
Yes, we do support SSO for Google and Microsoft (Active Directory).
Yes, we support installation in your own cloud environment of choice. Please contact us for more information.
If you want to automate deploying to UbiOps and setting up pipelines, take a look at our CLI or Python/R clients.
That depends on whether you use UbiOps SaaS or UbiOps On-Prem. In the case of SaaS, your code and models run on our GCP cluster that you interface with using the API. In the case of On-Prem (whether that is on your Azure/AWS cloud or private cloud), UbiOps is a layer on top of your infrastructure and requires Kubernetes to be installed.
Our platform has an overhead of <150ms for requests. This is of course also dependent on network latency outside of the UbiOps infrastructure.
When UbiOps is installed in your own cloud environment, all the data traffic and code will stay within this environment and does not by default depend on external connections.
You do need a public internet connection to build models (using pip packages and UbiOps YAML).
Not yet, but we’re working on it. When it’s implemented we will inform our users.
See our installation guide.
UbiOps can run in an air-gapped environment. However, if you need certain package repositories such as PyPI, you need to make these available locally too.
We do not offer specific training functionality. UbiOps focuses on deployment, orchestration and monitoring. However, thanks to the flexibility of UbiOps, you can easily deploy code which takes care of training your model and create a training pipeline using our pipeline functionality.
Yes, you can use the UbiOps CLI [https://github.com/UbiOps/command-line-interface] to deploy code from a (local) Git repository. You can also integrate with a CI/CD tool like Gitlab Pipelines or Jenkins to automate the deployment step after a code merge. For more information, see: https://ubiops.com/docs/tutorials/git-sync/
You can use our CLI to build and deploy a deployment package from a selected set of folders/files without changing the repo or project structure. To see how, please see: [https://ubiops.com/docs/tutorials/git-sync/]
UbiOps is specifically developed to bring models, pipelines, functions and scripts into operation as simply and intuitively as possible, in a single environment. UbiOps focuses on the deployment, serving, orchestration and monitoring of your code while giving you the flexibility of using your own tooling for development. You can start creating reliable and robust solutions without thinking about servers, scaling, monitoring and other DevOps-related tasks. In addition, UbiOps provides you the tools and best practices to manage and monitor your models during their lifecycle.
Currently, we only track predefined metrics on models. It is on our roadmap to be able to monitor custom variables and make comparisons between different deployments and their versions.
The UbiOps runtime is restricted to invoking Python or R code. Matlab needs its own runtime (with license), so it cannot be installed inside the container. If you can install a language or dependency as an APT package and invoke it from Python, it will work.
You can use our CLI to integrate with your existing CI/CD workflow from Gitlab, Github or other tools. Furthermore, UbiOps has the concept of deployment versions and version revisions. This way you can keep track of changes and synchronize UbiOps with the state of your code versioning.
UbiOps only stores and processes your data for handling requests to your models. Data passed to UbiOps is deleted once the request is finished. We only store the logs of the models and scripts that you run, so you can retrieve them from the log explorer or through the API.
In the case of the SaaS offering, UbiOps comes with the highest security standards. Both your data and models are encrypted at rest. You can opt in to two-factor authentication. Your data traffic is encrypted with TLS. When used on-prem or in a VPC, security is your responsibility.
UbiOps SaaS is hosted on Google Cloud, region europe-west4 (NL). If you choose UbiOps On Premises, you can use your own preferred Cloud provider or local server.
UbiOps comes with an intuitive User Interface (UI), API and Command-Line Interface (CLI) from which you can access all functionality. We also offer Client Libraries to integrate UbiOps API methods within your own programs and scripts.
We will keep adding new functionality to UbiOps in periodic releases. You can find release notes on our website, or by subscribing to our newsletter here. In case of any downtime or compatibility issues with current or older versions, we will always notify you beforehand in detail. If you use UbiOps On-Premise, we offer support for updating to the latest releases. But we do not force any updates; you’re in control.
Yes, UbiOps does! UbiOps is free for students for educational purposes for an unlimited amount of time. Students can benefit from an unlimited number of projects and deployments. With UbiOps, students can work together easily: the pipeline functionality connects different pieces of code, and multiple users can work on the same project, so you can collaborate with your group members. How to set up the student plan? 1. Create a UbiOps account on the website with your academic email address. 2. To upgrade it to an academic account, send an email to [email protected] from your academic email address. If you would like to work together with other students on a project, please mention this; you will be added to one organization.
No. UbiOps will build the Docker image for you. Using our base image with the language you select, it will install packages from your requirements.txt and OS-level APT packages from ubiops.yaml (see docs: https://ubiops.com/docs/deployments/deployment-package/ubiops-yaml/). It is currently not possible to upload your own Dockerfiles.
We try to make it as easy as possible for data scientists and teams to deploy and serve their code, without compromising on security and scalability. You can make deployments manually using our web UI, but you can also use our Python/R clients and CLI to automate your deployment work. With the various user management and permission options, it is easy to work in UbiOps as a team.
You can set several scaling parameters and settings on a deployment level. This includes memory allocation, the min/max number of parallel instances (including scale-to-zero) and the idle time-to-live (TTL) of your deployment after inactivity. We automate many things in the background to ensure you get the power of Kubernetes without the hassle.
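As an illustrative sketch, the settings involved might look like the following. The field names here are assumptions based on the advanced-parameters docs; check https://ubiops.com/docs/deployments/advanced-parameters/ for the exact names.

```python
# Illustrative scaling configuration for a deployment version.
# Field names are assumptions; verify them against the UbiOps docs.
scaling_settings = {
    "memory_allocation": 2048,   # MB of memory per instance
    "minimum_instances": 0,      # 0 enables scale-to-zero
    "maximum_instances": 5,      # upper bound on parallel instances
    "maximum_idle_time": 300,    # seconds before an idle instance stops
}

# With minimum_instances set to 0, no instances run (and nothing is
# billed for compute) while there are no requests.
print(scaling_settings["minimum_instances"])  # → 0
```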
There are several ways you can do this. One way (in SaaS) is to split your dev, test and prod environments in different projects. When UbiOps is installed in your own environment, there is more flexibility and you can split the dev and production between different clusters for instance.
Yes, you can. Python packages you can install with a requirements.txt file in your deployment package. OS/Docker level (APT) packages, you can install using the ubiops.yaml [https://ubiops.com/docs/deployments/deployment-package/ubiops-yaml/]
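As a sketch, the two files might look like this (the package names are just examples; see the ubiops.yaml docs linked above for the exact schema):

```
# requirements.txt — Python dependencies, installed with pip during the build
numpy==1.24.0
scikit-learn
```

```yaml
# ubiops.yaml — OS-level (APT) dependencies installed into the image
apt:
  packages:
    - ffmpeg
    - libgomp1
```

Both files go in the root of your deployment package alongside your code.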
All our images are based on Ubuntu 20.04. Windows containers are currently not supported.
You can make use of our CLI and Python or R clients to integrate and automate your workflow.
You can create service users (API tokens) and assign them roles and permissions depending on what you need. This allows you, for instance, to give access to a specific model or pipeline.
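As a minimal sketch, a service-user token is passed in the Authorization header of API calls. The "Token" scheme and the endpoint path below are assumptions based on the UbiOps API docs; substitute your own project name and token.

```python
import urllib.request

# Hypothetical service-user token and the public API base URL.
API_TOKEN = "Token abcd1234"
BASE_URL = "https://api.ubiops.com/v2.1"

def build_request(path):
    """Build a GET request carrying the service-user token.

    The request is only constructed here, not sent, so this sketch
    runs without network access.
    """
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": API_TOKEN},
    )

req = build_request("/projects/my-project/deployments")
print(req.get_header("Authorization"))  # → Token abcd1234
```

If the token's role only covers one deployment, calls outside that scope will be rejected by the API.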
We perform certain validation steps to see if your image builds without errors. Code specific tests you can include in your model package, or include in your CI/CD pipeline. We are working on extending the testing and validation options within UbiOps to support user-defined tests specific to your code and data.
The only requirement is to include a deployment.py file in your deployment package. This file needs to include a deployment class and a request() function. This function will be invoked when a request is made. You can extend your deployment package from here in any way you like. Please see our documentation [https://ubiops.com/docs/deployments/deployment-package/deployment-structure/] and deployment template [https://github.com/UbiOps/deployment-template] for more details. For R, please see [https://ubiops.com/docs/deployments/deployment-package/r-deployment-package/]
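A minimal deployment.py following that structure could look like this. The input and output field names ("value", "result") are hypothetical; they must match the fields you define for your deployment.

```python
# Minimal deployment.py sketch: a Deployment class whose request()
# method is invoked for every request made to the deployment.
class Deployment:

    def __init__(self, base_directory=None, context=None):
        # Runs once when an instance starts: load models, open
        # connections, read files from the deployment package, etc.
        self.multiplier = 2

    def request(self, data):
        # `data` holds the input fields of the request; the returned
        # dict holds the output fields.
        return {"result": data["value"] * self.multiplier}

# Local smoke test of the same interface UbiOps would call:
deployment = Deployment()
print(deployment.request({"value": 21}))  # → {'result': 42}
```

Because the class is plain Python, you can unit-test it locally before uploading the package.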
UbiOps has no built-in training functionality, but you certainly can run (re-)training jobs and automate deployment with our Python/R client. See this example for instance [https://ubiops.com/docs/ubiops_cookbook/scikit-deployment/]
You can create database connections from a deployment. We also have templates for connecting to common database systems [https://ubiops.com/docs/data-connections/connectors/]. It basically comes down to creating the connection from the Python/R code in your deployment, which provides a lot of flexibility to implement database connections in the way you want.
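A common pattern is to open the connection once in the deployment's initialization rather than per request. The sketch below uses the standard-library sqlite3 module as a stand-in for a real driver (psycopg2, pymysql, ...); on UbiOps you would add that driver to requirements.txt and read credentials from environment variables.

```python
import sqlite3

class Deployment:

    def __init__(self, base_directory=None, context=None):
        # Open the connection once at startup, not on every request.
        # An in-memory SQLite database stands in for your real database.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE scores (name TEXT, score REAL)")

    def request(self, data):
        # Write the incoming record, then report the table size.
        self.conn.execute(
            "INSERT INTO scores VALUES (?, ?)",
            (data["name"], data["score"]),
        )
        count, = self.conn.execute("SELECT COUNT(*) FROM scores").fetchone()
        return {"rows": count}

deployment = Deployment()
print(deployment.request({"name": "model-a", "score": 0.93}))  # → {'rows': 1}
```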
Not yet, but we are currently working on supporting that. Do you have ideas and/or preferences on what you need in terms of GPU support? Join our Slack (link) and get in touch; we are happy to discuss it with you.
Yes, UbiOps is built on top of Kubernetes and can be installed on AWS, GCP, Azure and OpenStack.
Pipelines are sequences of deployments in which UbiOps handles the flow of data. This allows you to break down complex workloads into different steps that scale individually.
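Conceptually, a pipeline is function composition where each step is a deployment and UbiOps routes each step's output into the next step's input. A local sketch (with hypothetical preprocessing and prediction steps) of that flow:

```python
# Each function stands in for one deployment in a pipeline.
def preprocess(data):
    return {"text": data["text"].lower().strip()}

def predict(data):
    # Hypothetical keyword-based classifier.
    label = "positive" if "good" in data["text"] else "negative"
    return {"label": label}

def run_pipeline(data, steps=(preprocess, predict)):
    for step in steps:      # On UbiOps, this routing is handled for you,
        data = step(data)   # and each step scales independently.
    return data

print(run_pipeline({"text": "  This is GOOD  "}))  # → {'label': 'positive'}
```

On UbiOps each of these steps would be its own deployment with its own scaling settings and endpoint, rather than functions in one process.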
UbiOps is intended for inferencing. You could do training in UbiOps, but there are frameworks better suited for this in terms of experiment traceability and model selection.
We do not support conda at this moment, but it is on our roadmap.
Yes, you can access artifacts (if stored as blobs, i.e. objects) through our API.
Yes, we have several free templates for models on our GitHub. We currently do not enforce a particular code structure on users.
UbiOps is well suited for any model that can process individual chunks of data without a strong dependency on state. There are thousands of examples that fit into this category, including but not limited to image processing, classification or recommendation models. But there is a lot possible with the pipeline and blob functionality within UbiOps, which for instance allows for video and time series processing too. [https://ubiops.com/docs/ubiops_cookbook/]
In our SaaS version, UbiOps is limited to 16GB of memory per container. In enterprise and local installations, this depends on the underlying servers and can be customized.
Currently, CPU is coupled to the amount of allocated memory. For every GB of memory, you get 1/4 vCPU.
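The ratio above can be written out as a small helper. The function name is illustrative; only the 1/4 vCPU per GB ratio comes from the answer above.

```python
def vcpus_for_memory(memory_mb):
    """Return the vCPU allocation implied by a memory setting in MB,
    using the stated ratio of 1/4 vCPU per GB of memory."""
    return (memory_mb / 1024) * 0.25

print(vcpus_for_memory(2048))   # → 0.5
print(vcpus_for_memory(16384))  # → 4.0 (at the 16 GB SaaS maximum)
```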
Yes, you can run jobs up to 48 hours.
Not natively, but you can run a streaming connector (RabbitMQ, for instance [link]) inside a deployment. Such a connector does need to be scheduled, however, and will be restarted whenever the deployment reaches its idle TTL. With long-running requests, running these polling connectors for longer periods is possible.
UbiOps serves both groups. Whether you want to run 1 or 1,000 jobs in parallel, the auto-scaling feature of UbiOps makes sure that you can scale whenever you need, and you only pay for what you use.