Tag: mlmodel

How to optimize the inference time of your machine learning model

Functionality

August 15, 2023 / August 17, 2023 by [email protected]

Pros and cons of different techniques. More and more companies are actively using artificial intelligence (AI) in their business, and, slowly but surely, more models are being brought into production. When making the step towards production, inference time starts to play an important role. When a model is external user-facing, you typically want to […]
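The full post weighs the pros and cons of several optimization techniques; as a minimal illustration of the measurement step only (using scikit-learn as a stand-in model, not code from the post), inference latency can be timed like this:

```python
# Minimal latency-measurement sketch (illustrative only, not from the post).
# RandomForestRegressor is a stand-in; swap in your own model.
import time
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(1000, 20)
y = np.random.rand(1000)
model = RandomForestRegressor(n_estimators=100).fit(X, y)

# Warm up once, then time repeated single-sample predictions.
model.predict(X[:1])
runs = 100
start = time.perf_counter()
for _ in range(runs):
    model.predict(X[:1])
latency_ms = (time.perf_counter() - start) / runs * 1000
print(f"Average inference latency: {latency_ms:.2f} ms")
```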

Read more »

Tagged: ai, inference time, ml, mlmodel
Using the new file system for training a tensorflow model

Functionality Technology UbiOps

February 20, 2023 / July 26, 2023 by [email protected]

A new file system in UbiOps. We recently released a new file system at UbiOps, which makes it a lot easier to work with files on the UbiOps platform. To show you how the new system works, I will walk you through an example. In the last release we changed our file system from working […]
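The post itself focuses on the UbiOps file system; as a generic sketch of only the TensorFlow training-and-saving step (the UbiOps upload calls are omitted here, since they are what the full tutorial covers), the model side might look like this:

```python
# Generic TensorFlow training sketch (illustrative; the UbiOps file-system
# upload step from the post is intentionally not shown here).
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 8).astype("float32")
y = np.random.rand(500, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Save a local artifact; in the post this file is then stored via the
# UbiOps file system.
model.save("model.h5")
```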

Read more »

Tagged: #datascience, ai, file system, ml, mlmodel, training, ubiops

Deploy your ML model with UbiOps and monitor it with WhyLabs

Blog Collaborations Functionality

January 11, 2022 / January 3, 2024 by [email protected]

We are excited to share that we’ve partnered with WhyLabs to enable easy #machinelearning model monitoring 🚀⌚️. To showcase the integration, we will train a model to predict the price of a used car based on a number of factors (including horsepower, year, and mileage), deploy it with UbiOps and then monitor it “in production” with WhyLabs. We use […]
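As a rough sketch of the kind of model the post describes (the toy data and column names below are assumptions, not the dataset used in the post), a used-car price regressor could be trained like this before being deployed on UbiOps and wired up to WhyLabs monitoring:

```python
# Illustrative used-car price regressor (toy data; column names are
# assumptions, not the dataset from the post).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "horsepower": [90, 150, 120, 200, 75, 110],
    "year":       [2012, 2018, 2015, 2020, 2010, 2016],
    "mileage":    [120000, 40000, 80000, 15000, 160000, 60000],
    "price":      [4500, 15500, 9800, 27000, 3200, 11000],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["horsepower", "year", "mileage"]], df["price"],
    test_size=0.33, random_state=0,
)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Predicted price:", model.predict(X_test.iloc[[0]])[0])
```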

Read more »

Tagged: #AI, jupyter, kaggle, ml, mlmodel, monitoring, notebook, Python
Serving, hosting and monitoring of an xgboost model: UbiOps and Arthur

Blog Collaborations

May 25, 2021 / January 5, 2024 by [email protected]

Making MLOps observable and explainable: serving, hosting and monitoring combined. Maximizing performance and minimizing risk are at the heart of model monitoring. For mission-critical AI models, real-time visibility is crucial. Respond quickly by retraining your model, replacing the current version or tweaking the pipelines. Additionally, setting up your serving and hosting infrastructure in a […]
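As a minimal, assumption-laden sketch of the xgboost side only (serving on UbiOps and monitoring with Arthur are what the full post covers and are not shown here), training and persisting such a model could look like this:

```python
# Minimal xgboost training sketch (illustrative; UbiOps serving and Arthur
# monitoring integration are covered in the full post, not here).
import numpy as np
from xgboost import XGBRegressor

X = np.random.rand(200, 10)
y = np.random.rand(200)

model = XGBRegressor(n_estimators=50, max_depth=3)
model.fit(X, y)

# Persist the model so a serving deployment can load it later.
model.save_model("xgboost_model.json")
print("Sample prediction:", model.predict(X[:1])[0])
```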

Read more »

Tagged: #AI, integration, mlmodel, MLOps, monitoring