Powerful model serving and orchestration
There’s so much you can build with UbiOps at your fingertips. Deploy your Python and R code on UbiOps and instantly scale it in the cloud. Use it for real-time serverless inference or for long-running jobs. Build single services as well as large workflows, and choose between efficient CPU or accelerated GPU instances.

Logging
The logs keep track of everything happening in your project. They are also your primary source of information for debugging when something goes wrong.
Deployments
Deployments run your code in a scalable way as a containerized microservice. Each deployment has a unique API endpoint for receiving requests with data to process.
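To give an idea of what the code inside a deployment looks like: UbiOps expects a `deployment.py` file containing a `Deployment` class with an `__init__` method (run once at start-up) and a `request` method (run for every incoming API request). The sketch below assumes a toy deployment with one input field `number` and one output field `squared`; the field names are illustrative, not prescribed.

```python
class Deployment:
    """Minimal UbiOps-style deployment: initialized once, then called per request."""

    def __init__(self, base_directory, context):
        # Runs once when the deployment instance starts up,
        # e.g. to load a model file from base_directory.
        self.base_directory = base_directory

    def request(self, data):
        # Runs for every API request. `data` holds the input fields;
        # the returned dict must match the deployment's output fields.
        return {"squared": data["number"] ** 2}
```

UbiOps packages this class into a container and calls `request` for each payload sent to the deployment's endpoint.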
Pipelines
Pipelines let you create larger workflows by connecting different deployments together. This allows you to build larger, modular applications.
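Conceptually, a pipeline chains deployments so that one's output becomes the next one's input. The plain-Python sketch below illustrates that idea only; it is not the UbiOps pipeline API, and the step names are made up.

```python
def preprocess(data):
    # First "deployment": normalize the raw input text.
    return {"clean_text": data["text"].strip().lower()}

def classify(data):
    # Second "deployment": a stand-in model on the preprocessed text.
    return {"label": "positive" if "good" in data["clean_text"] else "negative"}

def pipeline(data):
    # The pipeline wires the two steps together, like connected
    # deployments behind a single pipeline endpoint.
    return classify(preprocess(data))
```

In UbiOps each step would be its own deployment with its own scaling behavior, while the pipeline exposes one endpoint for the whole chain.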
Audit Events
The audit events show all activity in your project. They provide you with a full audit trail of what has changed and when.
Request schedules
Do you have a model or pipeline that needs to run on a fixed schedule? No worries, just configure a request schedule and we’ll make sure it runs on time.
Metrics
Quickly see how your models are doing and keep an eye on data traffic in your project. There are many more metrics on the monitoring page.
Built for data science teams
UbiOps automatically containerizes your code, creates a service with its own API and takes care of handling requests, automatic scaling, monitoring and security.

Turn your AI models into scalable microservices
Deploy your code in no time with our easy-to-use browser interface, Python / R client or CLI.
- Manage all your models in one place with version control and revisions.
- Don’t worry about Kubernetes, Docker images, uptime, scaling, monitoring and security. Python or R experience is enough.
- Process any type of data: structured data, files, images, text, sensor data, and more.
- UbiOps supports both low-latency requests and asynchronous batch jobs. You can also schedule runs for deployments and pipelines.
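To show what sending a request involves: each deployment exposes an HTTP endpoint that accepts the input fields as JSON, authenticated with an API token. The sketch below only assembles the request; the base URL, endpoint path, project name and token are assumptions and placeholders, so check the UbiOps API reference for the exact values.

```python
import json

API_TOKEN = "Token <YOUR_API_TOKEN>"      # placeholder credential
BASE_URL = "https://api.ubiops.com/v2.1"  # assumed API base URL

def build_request(project, deployment, data):
    # Assemble the URL, headers and JSON body for a deployment request.
    url = f"{BASE_URL}/projects/{project}/deployments/{deployment}/requests"
    headers = {"Authorization": API_TOKEN, "Content-Type": "application/json"}
    return url, headers, json.dumps(data)

url, headers, body = build_request("demo-project", "my-model", {"number": 3})
# Actually sending it is omitted here; with the `requests` package it would be:
# requests.post(url, headers=headers, data=body)
```

The same call can also be made through the UbiOps Python / R client or CLI instead of raw HTTP.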
Auto-scale with access to on-demand CPU and GPU compute
Ready to scale while paying only for what you use
- Deployments scale automatically with the number of API calls.
- Scale-to-zero functionality. Only pay when your deployments are running.
- Choose the compute instances to suit your model. Access to both CPU and accelerated GPU hardware.
- Run in public cloud, hybrid cloud or on-premises.


Create and orchestrate workflows
Re-use and combine multiple deployments in a workflow.
- Each deployment in a workflow is an isolated service that scales independently, improving the efficiency and scalability of your application.
- In workflows you have the option to bypass deployments and merge output from multiple deployments into one.
- Import/export pipelines directly and share them with your colleagues or other users.
- Each workflow gets a unique API.
Keep track of everything in one place
- View metrics on usage and performance.
- Check for issues with your deployments.
- Set email alerts and notifications.
- Get insight into everything that’s going on with extensive logging.
- Use the UbiOps web interface, API, Python / R client or CLI to automate your workflow.