
Category: Functionality


Functionality, Technology

Why are companies opting for on-premise instead of public cloud?

September 17, 2024 / September 17, 2024 by [email protected]

In a LinkedIn post from May, Fergal McGovern tries to explain why around 83% of enterprise CIOs plan to place some workloads on-premise instead of on-cloud. Let’s briefly explain what we mean when we say on-cloud and on-premise: on-cloud storage is when data is stored in data centers operated by third parties which […]

Read more »


Functionality, Technology

UbiOps vs MLOps platforms

August 29, 2024 / August 29, 2024 by [email protected]

Machine learning operations (MLOps) involve a set of techniques and principles aimed at the design, development, deployment, and maintenance of machine learning models for production use. The purpose of MLOps is to establish a clear set of guidelines to simplify the complex process of bringing a model into production. You can also learn more about […]

Read more »

Tagged

ai, ml, MLOps

Deploy your model, Functionality

Create a chatbot using Llama 3.1, Streamlit and UbiOps

August 23, 2024 / August 23, 2024 by [email protected]

In recent times we’ve seen that open-source LLMs like Mixtral and Llama are starting to rival the performance of some proprietary LLMs. One thing to consider when working with open-source models, though, is that they do not come ready to go for every use case out of the box, like the lack of […]

Read more »


Functionality, Review product

UbiOps vs Standard Model Serving Platforms

July 12, 2024 / February 17, 2025 by [email protected]

What does UbiOps deliver beyond standard model serving platforms? Model serving is the process of providing access to production-level models for end users or applications, meaning that they are deployed for internal or external use. In most cases, such as with UbiOps, they are available via a REST API. This stage is very […]
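Since the excerpt mentions that served models are exposed via a REST API, here is a minimal, hypothetical sketch of what calling such an endpoint can look like with Python's requests library. The URL pattern, project and deployment names, and token format are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: calling a served model over REST with the requests library.
# The URL pattern, project/deployment names, and token below are illustrative
# placeholders, not taken from the article.
import requests

API_TOKEN = "Token <your-api-token>"  # assumed auth scheme
URL = (
    "https://api.ubiops.com/v2.1/projects/demo-project"
    "/deployments/my-model/requests"  # hypothetical endpoint path
)

response = requests.post(
    URL,
    headers={"Authorization": API_TOKEN},
    json={"data": {"input": "example payload"}},  # payload shape depends on your deployment
    timeout=30,
)
response.raise_for_status()
print(response.json())
```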

Read more »

Tagged

ai model, ML model, MLOps, model serving

Functionality

What is multi-model routing?

June 21, 2024 / June 21, 2024 by [email protected]

Multi-model routing is the process of linking multiple AI models together. The routing can be done either in series or in parallel, meaning that you use a router to send prompts to specific models. Multi-model routing can have various benefits. It enables you to have smaller and […]
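To make the series/parallel routing idea concrete, below is a rough, framework-agnostic sketch of a parallel-style router that picks one of several models per prompt. The model functions and the keyword heuristic are invented purely for illustration.

```python
# Illustrative sketch of a simple multi-model router (not UbiOps-specific).
# The model names and routing rule are hypothetical.
from typing import Callable, Dict

def summarize_model(prompt: str) -> str:
    return f"[summary model] {prompt[:50]}..."

def code_model(prompt: str) -> str:
    return f"[code model] {prompt[:50]}..."

def general_model(prompt: str) -> str:
    return f"[general model] {prompt[:50]}..."

MODELS: Dict[str, Callable[[str], str]] = {
    "summarize": summarize_model,
    "code": code_model,
    "general": general_model,
}

def route(prompt: str) -> str:
    """Pick a model based on a trivial keyword heuristic."""
    text = prompt.lower()
    if "summarize" in text:
        key = "summarize"
    elif "code" in text:
        key = "code"
    else:
        key = "general"
    return MODELS[key](prompt)

print(route("Summarize this report for me"))
print(route("Write code to parse a CSV file"))
```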

Read more »

Tagged

ai, deploy, ml, multimodel

Functionality, LLM

Reducing inference costs for GenAI

May 28, 2024 / May 28, 2024 by [email protected]


Read more »


Functionality, Technology

Creating a front-end for your Mistral RAG

May 22, 2024 / May 22, 2024 by [email protected]

In a previous article we showed how you can set up a Retrieval-Augmented Generation (RAG) framework for the Mistral-7B-Instruct-v0.2 LLM using the UbiOps WebApp. In this article we’ll go a step further and create a front-end for that setup using Streamlit, and we’ll be using the UbiOps Python Client Library to set up the […]
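As a rough idea of what such a Streamlit front-end can look like, here is a minimal chat UI sketch. The call_rag_backend function is a placeholder for the UbiOps Python Client Library call the article builds around; it is not the article's actual code.

```python
# streamlit_app.py -- minimal chat front-end sketch (run with: streamlit run streamlit_app.py)
# call_rag_backend() is a placeholder for the actual RAG deployment request made in the article.
import streamlit as st

def call_rag_backend(prompt: str) -> str:
    # Placeholder: in the article this would call the RAG deployment via the
    # UbiOps Python Client Library and return the generated answer.
    return f"(model response to: {prompt})"

st.title("Mistral RAG chat")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

if prompt := st.chat_input("Ask a question"):
    st.session_state.history.append(("user", prompt))
    with st.chat_message("user"):
        st.write(prompt)
    answer = call_rag_backend(prompt)
    st.session_state.history.append(("assistant", answer))
    with st.chat_message("assistant"):
        st.write(answer)
```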

Read more »

Tagged

llm, mistral, RAG

Functionality, LLM

How to optimize inference speed using batching, vLLM, and UbiOps

May 15, 2024 / May 15, 2024 by [email protected]

In this guide, we will show you how to increase data throughput for LLMs using batching, specifically by utilizing the vLLM library. We will explain some of the techniques it leverages and show why they are useful. We will be looking at the PagedAttention algorithm in particular. Our setup will achieve impressive performance results and […]
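For orientation, a minimal offline-batching sketch with vLLM might look like the following; the model name and sampling settings are assumptions for illustration, not the exact configuration used in the guide.

```python
# Sketch of offline batched generation with vLLM (model name and settings are illustrative).
# vLLM batches the prompts internally and uses PagedAttention to manage the KV cache.
from vllm import LLM, SamplingParams

prompts = [
    "Explain what batching does for LLM throughput.",
    "Give one sentence about PagedAttention.",
    "Why do GPUs help with inference?",
]

sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # assumed example model
outputs = llm.generate(prompts, sampling_params)

for out in outputs:
    print(out.outputs[0].text.strip())
```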

Read more »

Functionality

How to benchmark and optimize LLM inference performance (for data scientists)

May 3, 2024 / May 3, 2024 by [email protected]

Optimizing inference is a machine learning (ML) engineer’s task. In a lot of cases, though, it tends to fall into the hands of data scientists. Whether you’re a data scientist deploying models as a hobby or you work in a team that lacks engineers, at some point you will probably have to start […]
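As a starting point for this kind of benchmarking, a bare-bones latency and throughput loop can look like the sketch below. The generate and count_tokens functions are placeholders for your own model call and tokenizer; the article's actual setup may differ.

```python
# Minimal sketch of measuring latency and token throughput for any generate() callable.
# generate() and count_tokens() are stand-ins for your model call and tokenizer.
import statistics
import time

def generate(prompt: str) -> str:
    time.sleep(0.05)              # placeholder for a real model call
    return "example output " * 20

def count_tokens(text: str) -> int:
    return len(text.split())      # crude proxy; use your tokenizer in practice

prompts = ["benchmark prompt"] * 10
latencies, tokens = [], 0

for p in prompts:
    start = time.perf_counter()
    out = generate(p)
    latencies.append(time.perf_counter() - start)
    tokens += count_tokens(out)

total = sum(latencies)
print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
print(f"max latency:  {max(latencies) * 1000:.1f} ms")
print(f"throughput:   {tokens / total:.1f} tokens/s")
```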

Read more »

Tagged

benchmark, llm

Functionality, Technology

Fine-tune a model on your own documentation

March 28, 2024 / March 28, 2024 by [email protected]

In this article, we will be creating a chatbot which is fine-tuned on custom documentation. We’ll use UbiOps—which is an AI deployment, serving and management platform—to fine-tune and deploy the instruction-tuned Mistral-7B model taken from Hugging Face. We’ll explain some of the methods used to fine-tune models, such as instruction tuning and domain adaptation, but […]

Read more »

Tagged

#AI, finetune, ml

Deploy your model, Functionality, Technology, UbiOps

What is model serving?

March 19, 2024 / March 21, 2024 by [email protected]

Model deployment, or model serving, designates the stage in which a trained model is brought to production and made readily usable. A model-serving platform allows you to easily deploy and monitor your models hassle-free. Below is the MLOps dev cycle and how UbiOps can be used within that cycle. How UbiOps fits into the MLOps dev […]

Read more »


Functionality, Technology

Implementing RAG for your LLM (Mistral)

January 30, 2024 / February 20, 2024 by [email protected]

Most of the open-source models available on Hugging Face come pre-trained on a large corpus of publicly available data, like WebText. In general, the size of these datasets gives large language models (LLMs) adequate performance for various use cases. For some more specific use cases, however, more domain-specific knowledge is required for the LLM […]
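The core retrieval step that RAG adds can be sketched in a few lines: embed the documents and the query, then keep the most similar chunk as context for the prompt. The embedding model and documents below are illustrative assumptions, not the article's exact setup.

```python
# Minimal sketch of the retrieval step in RAG using sentence-transformers.
# The embedding model name and documents are illustrative, not from the article.
from sentence_transformers import SentenceTransformer, util

documents = [
    "UbiOps lets you serve models behind a REST API.",
    "Mistral 7B is an open-source large language model.",
    "RAG injects retrieved context into the prompt before generation.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed example model
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "How does retrieval augmented generation work?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity and keep the best match as context.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = documents[int(scores.argmax())]

prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)
```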

Read more »

Tagged

llm, mistral, RAG

Functionality, UbiOps

UbiOps Compute Platform

December 28, 2023 / July 3, 2024 by [email protected]

Facilitate a hybrid-cloud strategy and save weeks of work!

Read more »

Tagged

cloud, compute, deployment, virtual machine

Functionality

Falcon LLM fine-tuning

December 18, 2023 / December 18, 2023 by [email protected]

In the good old days, machine learning models were made from scratch by data scientists. This involved acquiring and cleaning data before training a model and getting it to production. In recent years, though, the size of models has increased, and so has the amount of training data required to train these new, larger models. This […]

Read more »

Tagged

falcon, fine-tune, llm

Functionality

How to optimize the inference time of your machine learning model

August 15, 2023 / August 17, 2023 by [email protected]

Pros and cons of different techniques. More and more companies are actively using artificial intelligence (AI) in their business, and, slowly but surely, more models are being brought into production. When making the step towards production, inference time starts to play an important role. When a model faces external users, you typically want to […]

Read more »

Tagged

ai, inference time, ml, ml model

Functionality, Technology

Everything you need to know about GPUs – 2023 Guide

August 15, 2023 / September 6, 2023 by [email protected]

A graphics processing unit (GPU) is a processor that is made up of smaller, more specialized cores. Originally designed to accelerate graphical calculations, GPUs were developed for parallel processing, which means that they are able to process data simultaneously in order to complete tasks more quickly. In other words, GPUs are able […]

Read more »

Tagged

GPU

Functionality, Technology

Deploy LLaMa 2 with a customizable front-end in under 15 minutes using only UbiOps, Python and Streamlit: in 2024

August 4, 2023 / January 22, 2024 by [email protected]

What can you get out of this guide? In this guide, we explain how to deploy LLaMa 2, an open-source Large Language Model (LLM), using UbiOps for easy model hosting and Streamlit for creating a chatbot UI. The guide provides step-by-step instructions for packaging a deployment, loading it into UbiOps, configuring compute on GPUs and […]

Read more »

Tagged

LLaMa2, MLOps, Python, streamlit, UbiOps guide

Functionality, Technology

How High-Performance Computing (HPC) Boosts Artificial Intelligence

July 28, 2023 / August 17, 2023 by [email protected]

Artificial Intelligence (AI) has emerged as a world-changing technology with a wide range of applications across industries. From virtual assistants to autonomous vehicles and advanced data analytics, AI has started to revolutionize the way we live and work. However, most AI algorithms require a very large amount of computational power to process and analyze all the necessary data. This is where […]

Read more »

Tagged

ai, cloud computing, HPC, ml, MLOps

Events, Functionality

UbiOps releases AI model training functionality

July 28, 2023 / August 17, 2023 by [email protected]

UbiOps, a leading platform for deploying and scaling Artificial Intelligence (AI) and Machine Learning (ML) models, is proud to introduce advanced functionality for training AI models in the cloud. This development allows businesses to manage even more of their AI development lifecycle on the UbiOps platform and to leverage Generative AI faster. Training and fine-tuning AI […]

Read more »

Tagged

ai, MLOps, training

Functionality, Technology

Benefit From Pre-Trained Open Source Foundation Models

June 25, 2023 / July 26, 2023 by [email protected]

One of the big reasons for the increased usage of AI on the web is the availability of open-source foundation models. Increasingly, Artificial Intelligence (AI) lies at the heart of online tools and applications. For example, the global chatbot market is expected to reach $1 billion by 2024, because chatbots can save companies […]

Read more »

Tagged

ai, foundation model, pre-trained

Functionality, Technology, Whitepapers

How to Run a Stock Market Index Prediction Model for the S&P 500 Index on UbiOps

June 25, 2023 / July 26, 2023 by [email protected]

The IT spending of financial institutions all over the world is steadily increasing, and is expected to reach over $750 billion by 2025. This is partly because of the significant increase in the development and deployment of AI systems. AI-powered systems can process large volumes of data very quickly and at a large scale. […]

Read more »


Collaborations, Functionality, Technology

Pioneering Agritech Innovation runs on UbiOps: Ridder

June 12, 2023 / January 13, 2025 by [email protected]

Agritech & AI working together: business owners in the horticultural and agritech sectors can use automated image recognition to automate their crop observations and thereby optimize harvest and work scheduling. This pioneering technology is being developed by Ridder and is made possible in part by UbiOps’ computation power. The sector has the opportunity to […]

Read more »

Tagged

Agritec, ai, Crop, image processing, ml, Ridder

Blog, Functionality, Product update

Training ML models on UbiOps

June 1, 2023 / July 26, 2023 by [email protected]

Training Machine Learning models in the cloud from scratch can be a challenging task. In this post we will dive into why UbiOps is not only useful for running and scaling model inference, but can also be used to run training jobs for Machine Learning models. UbiOps has built-in functionality for managing and running […]

Read more »

Tagged

ai, data, ml, MLOps, ml training, model training, train ml

Functionality, Product update

New UbiOps features June 2023

June 1, 2023 / July 31, 2023 by [email protected]

UbiOps release news – version 2.24.0. On the 1st of June 2023 we released new functionality and made improvements to our UbiOps SaaS platform. On this page we will walk you through the changes, with some examples of how to use the new functionality. Python client library version for this release: 3.15.0; CLI version for this release: 2.15.0 […]

Read more »

Tagged

ai, ml, MLOps, product, release, training