On the 11th of July 2024 we released new functionality and made improvements to our UbiOps SaaS product. An overview of the changes is given below.
Python client library version for this release: 4.5.0
CLI version for this release: 2.22.0
More information about active instances
To provide more information on what’s happening in the background with your deployments when they’re scaling up and down, we added the option to inspect all currently active instances.
To see all currently active instances in a project, navigate to Monitoring > Active instances. To view the active instances of a specific deployment version, navigate to Deployments > your deployment > your deployment version > Active instances. This information is also available directly from the API or the client library. Below is an example code snippet for retrieving it with the client library:
import ubiops

PROJECT_NAME = 'example-project'
API_TOKEN = 'token 123'

configuration = ubiops.Configuration()
configuration.api_key['Authorization'] = API_TOKEN
client = ubiops.ApiClient(configuration)
api = ubiops.CoreApi(client)

# Retrieve active instances at project level
active_instances = api.project_instances_list(project_name=PROJECT_NAME)

# Retrieve active instances for a specific deployment version
active_instances_depl = api.instances_list(
    project_name=PROJECT_NAME,
    deployment_name='example',
    version='v1',
)
Instance type groups
Can your deployment run on different kinds of instance types? You can now define this behavior by creating instance type groups. Instance type groups specify what kinds of instance types a deployment or experiment can run on.
This can be especially helpful when working in hybrid or multi-cloud setups. For example, let’s say you have some on-premise GPUs coupled to your UbiOps project.
You’ll probably want to run your workloads on your on-premise GPUs first, but scale out to our SaaS compute if they’re all occupied. An instance type group lets you define exactly that, by assigning your on-premise GPU instance type a higher priority than the SaaS instance types.
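To make the idea concrete, below is a minimal sketch of what such a group definition could look like. The field names, instance type ids, and the creation method mentioned in the comments are assumptions for illustration, not the exact API contract; consult the UbiOps API reference for the precise schema.

```python
# Hypothetical payload for an instance type group that prefers on-premise
# GPUs and falls back to SaaS compute. Field names and priority semantics
# are assumptions -- check the UbiOps API reference before using them.
group_definition = {
    "name": "on-prem-first",
    "instance_types": [
        # Assumed: higher-priority entries are tried first
        {"id": "on-prem-gpu", "priority": 1},   # your own hardware
        {"id": "16384-mb-t4", "priority": 2},   # SaaS fallback
    ],
}

# A deployment version or experiment would then reference the group instead
# of a single instance type, e.g. (method and parameter names assumed):
# api.instance_type_groups_create(
#     project_name=PROJECT_NAME,
#     data=group_definition,
# )
```

The fallback behavior follows from the priority ordering: UbiOps only schedules on the SaaS instance type when no on-premise GPU is available.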
Interested in coupling your own compute to UbiOps?
Do you own some on-premise or cloud compute that you’d like to couple to your UbiOps account?
Contact us to inquire about the possibilities!
Default metrics for training runs
We now track real-time default metrics for training runs: GPU utilization, CPU utilization, and memory utilization. You can find these metrics in the WebApp on the metrics tab of your training run. You can also fetch them from the API or with the client library by using the label deployment_request_id:<id>.
Please note that it was already possible to track custom metrics for training runs by using the UbiOps MetricClient.
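A short sketch of how the label filter for a training run would be built is shown below. The request id is a hypothetical placeholder, and the commented-out fetch call uses assumed method and parameter names; the exact signature should be taken from the UbiOps API reference.

```python
# Hypothetical request id of a training run, for illustration only
request_id = "your-request-id"

# Default metrics for a training run carry the run's request id as a label,
# so the filter takes the form deployment_request_id:<id>
labels = f"deployment_request_id:{request_id}"

# The metrics could then be fetched via the client library; the method and
# parameter names below are assumptions -- consult the API reference:
# metric_data = api.time_series_data_list(
#     project_name=PROJECT_NAME,
#     labels=labels,
# )
```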
Miscellaneous
We also made some miscellaneous changes and improvements:
We added the option to create multi-source connections via drag and drop in the pipeline editor.
We improved the way workloads are orchestrated and instances are scaled in the background. This entails faster cold starts and better resource availability.
We added descriptions for exports so you can easily check what you actually created the export for.
Quick copy buttons were added in the WebApp for copying the input/output of a request, request id, and experiment id.
We added an option to include all environment variables in an export without the need to specify exactly which ones.
We added an option to download the raw usage data directly from the WebApp for the organization and project usage charts. You can find these on the organization subscription page and the project settings page respectively.
Blob support has been dropped. Blobs were already deprecated in release 2.21.0.
Native R support has been deprecated. R can still be used by installing it manually via the ubiops.yaml file.