This guide explains how you can deploy a Stable Diffusion model in under 15 minutes.
The number of parameters in Machine Learning (ML) models has been growing over the last few years, with some models (like GPT-4) reportedly reaching over one trillion parameters. With this increase in size, developing these models requires more compute resources than before; one of the problems to tackle is the vast amount of training data needed to train such a large model. This increase in size, however, also means that AI can be used more and more in companies' day-to-day operations.
Deploying Foundation models
Companies are realizing that using AI models for things like chatbots can save them time and money. Getting access to (enough) training data and compute resources to develop these models can be hard, but luckily there is a solution. On platforms like Hugging Face you can get access to massive pre-trained ML models, which can then be adapted to your own use case using smaller, task-specific datasets. These pre-trained models are trained on massive datasets, and are also referred to as foundation models.
Foundation models can differ in size, availability (open-source or proprietary), and type. The most popular types are Computer Vision (CV) models and Large Language Models (LLMs). Deploying these models to production can be a challenge, since you'll have to overcome hurdles like:
- Managing the scaling of the model
- Managing private data (which is why some organizations steer away from cloud providers)
- Getting access to the right hardware
- Reliability (i.e., uptime)
Deploy your ML models within minutes
UbiOps is a platform where you can easily host and deploy your models. It takes care of auto-scaling, provides both on-premise and cloud solutions, and has an uptime of 99.99%. Beyond that, UbiOps makes sure you're able to run your model on state-of-the-art hardware, and exposes your model through an API endpoint.
There is already a guide available that shows you how you can easily deploy an LLM (LLaMa 2) with a customizable front-end, but how would this work for a CV model? Let’s take a look at what we need to deploy a Computer Vision (CV) model on UbiOps.
Stable Diffusion
Stable Diffusion is a model that generates photo-realistic images from any text input. The model was developed by the CompVis group at the Ludwig Maximilian University of Munich, and was released in 2022. For this guide we'll use the pre-trained Stable Diffusion v1-5 model from Hugging Face.
Deploying the model
Deploying a model is made easy with UbiOps: you only need to do four things to give your model access to the most powerful hardware and make it accessible to others by exposing it via an API endpoint.
In order to deploy the Stable Diffusion model, we need to create a deployment. When you upload your code to UbiOps, a container is built that runs as a microservice inside UbiOps. For a deployment we'll need to define an input and output; we'll show you how later in this guide. Each deployment has one or more versions, and for each version the coding environment, instance type, and even the deployed code can differ.
The environment the deployed code runs in can be managed separately. UbiOps provides several base environments that we can supplement with additional dependencies. Environments can be reused for different deployment versions (and training experiments). Reusing environments can greatly reduce the build time of deployments.
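As an illustration, the additional dependencies for a Stable Diffusion deployment would typically be listed in a `requirements.txt` file inside the deployment package. The exact package list below is a sketch of what such a file might contain; the deployment package used in this guide ships its own environment files:

```
# requirements.txt — extra dependencies on top of a Python base environment
# (illustrative; the actual deployment package defines its own list)
diffusers
transformers
torch
accelerate
```

Because environments are built once and then reused, adding only the packages you actually need keeps build times short.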
Important steps
So in short, we need to do the following to deploy the Stable Diffusion model from Hugging Face:
- Create a UbiOps account
- Create a code environment for our deployment (with UbiOps you can handle your code and the environment it runs in separately)
- Create a deployment for the Stable Diffusion model
- Make an API call to the Stable Diffusion model that we created in the previous step
To complete these steps, you'll need the following:
- Python 3.10 (or higher)
- A UbiOps account
- The deployment package, which contains the code to download the model from Hugging Face and the files to create the coding environment
Create a project & deployment
UbiOps works with organizations and projects. An organization can have multiple projects, and each project can have multiple deployments and/or pipelines (chains of deployments; read more about pipelines here).
After creating your account you can go over to the WebApp and create your first project. You can do this by clicking on the “+ Create new project” button and filling in a name. You can choose whatever name you want.
Then we can start creating our deployment. On the left hand side you can see the “Deployments” button. Click on this button, and then on the “+Create” button on the top right. Now we’re prompted to fill in some fields, like the name, and input & output for the deployment. You can choose whatever name you want, same as with the project.
As mentioned earlier, the Stable Diffusion model that we’ll be deploying converts text to image. Therefore, we need to define the input of the deployment (the prompt of the user) as a string (text), and the output of the deployment (the response of the model) as a file (an image):
| Deployment input & output | Name | Data type |
| --- | --- | --- |
| Input fields | prompt | string |
| Output fields | image | file |
After filling out the input & output of the deployment, UbiOps generates a deployment template. This template shows you how the deployment code should look in order for it to work. For this guide we already have the deployment code so we don’t have to use the template, but definitely have a look at it when you want to deploy your own code some day.
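To give you an idea of what such a template looks like, here is a minimal sketch of a `deployment.py` following the structure UbiOps expects: a `Deployment` class with an `__init__` method (run once at startup) and a `request` method (run for every request). The model-loading and image-generation details are stubbed out in comments; this is not the actual code from the deployment package:

```python
import os


class Deployment:
    def __init__(self, base_directory, context):
        # Runs once when the deployment instance starts up. In a real
        # Stable Diffusion deployment, this is where the model would be
        # downloaded from Hugging Face and loaded into memory.
        self.base_directory = base_directory
        self.model = None  # stub: stands in for the loaded pipeline

    def request(self, data):
        # Runs for every request. `data` holds the input fields defined in
        # the WebApp, so data["prompt"] is the user's text prompt.
        prompt = data["prompt"]

        # Stub: a real implementation would generate an image here, e.g.
        #   image = self.model(prompt).images[0]
        #   image.save(output_path)
        output_path = os.path.join(self.base_directory, "result.png")

        # The "image" output field was defined as a file, so we return a
        # path to the generated image.
        return {"image": output_path}
```

The names of the two methods and their signatures matter: UbiOps calls `__init__` when an instance spins up and `request` for each incoming request, which is why expensive work like model loading belongs in `__init__`.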
Create a version
Now we need to create a version. You can do this by scrolling all the way down and clicking on the "Next: Create a version" button. This is where we'll upload the deployment package, which contains our deployment code, and the environment files containing additional dependencies. The base environment for this deployment version will be Python 3.10.
You can leave the instance type as is, or select one with more resources; do keep in mind that the latter costs extra. For this guide we'll leave the version name at its default (v1) and select the most powerful CPU instance type available: "16384 MB + 4 vCPU". This instance type has the fastest inference time for a deployment that doesn't use a GPU but, as said before, is more expensive to run.
After filling out the deployment version creation form, we can scroll down again and click on the “Create” button. You can leave the Optional / Advanced settings at default. UbiOps will now create a container around our code, build the coding environment, and deploy the model.
You can monitor the progress of the building process by checking the logs, which can be accessed by clicking the “Logs” button next to the deployment name or by clicking on the “Logging” button on the left hand side.
Note that building a deployment can take a while, because UbiOps needs to build the coding environment first.
Create a request
When the deployment is available, we can go ahead and make our first request to it. You can do so by clicking on the "Create request" button and filling in something like "A man on a bicycle in Amsterdam". UbiOps will then load the model from Hugging Face and start processing the input.
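The same request can also be made programmatically through the deployment's API endpoint. Below is a minimal sketch using only Python's standard library. The token, project name, and deployment name are placeholders you must replace with your own, and the endpoint path follows the UbiOps v2.1 REST API (verify it against the API reference for your setup):

```python
import json
import urllib.request

API_TOKEN = "Token <YOUR_API_TOKEN>"   # placeholder: your UbiOps API token
PROJECT = "my-project"                 # placeholder: your project name
DEPLOYMENT = "stable-diffusion"        # placeholder: your deployment name


def build_request(prompt):
    """Build an authenticated POST request for the deployment endpoint."""
    url = (f"https://api.ubiops.com/v2.1/projects/{PROJECT}"
           f"/deployments/{DEPLOYMENT}/requests")
    payload = json.dumps({"prompt": prompt}).encode()
    headers = {
        "Authorization": API_TOKEN,
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=payload, headers=headers,
                                  method="POST")


req = build_request("A man on a bicycle in Amsterdam")
# response = urllib.request.urlopen(req)  # uncomment once real credentials are filled in
```

UbiOps also offers an official Python client library, which wraps these calls for you; the raw-HTTP sketch above just shows what goes over the wire.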
The first request to a deployment always takes longer than subsequent requests, because the model only needs to be loaded once. When no requests are made to the deployment for some time, the instance scales down again; a new request after scale-down needs to load the model again. You can set the minimum number of instances to 1 if you want your model to be active all the time.
The prompt “A pug in an astronaut suit with a pumpkin” generated the following image:
And there you have it!
In four steps we have just deployed a Stable Diffusion model from Hugging Face to a production-ready environment, using UbiOps.
Deploy your model with UbiOps
If you're curious about what other use cases you can use UbiOps for, like LLMs for example, have a look at our Tutorials page. There you can find how to deploy a number of different types of ML models, and how to utilize other powerful features of UbiOps, like pipelines. Alternatively, you can book a free demo with us to see how UbiOps can assist you and your company.