Make your organization smarter with add-on analytics.
Running data science and ML code as web services behind APIs greatly helps the adoption and integration of analytics inside your organization.
Using APIs as a clean interface between the analytics and the applications that use them allows for faster product development and re-usability of developed models in multiple applications.
At UbiOps, we often speak with companies who are starting out on their journey in Artificial Intelligence (AI) and Machine Learning (ML). They want to make use of data science and AI in their organization, but eventually struggle to incorporate the algorithms they develop into their applications. We see that progressing algorithms from the lab to the field often proves to be the hardest part.
Making new and existing applications smarter with Machine Learning and Artificial Intelligence is the future. But how do you run your algorithms in a way that separates your data science code from your core IT infrastructure? And how do you make algorithms easily accessible from all your applications at the same time?
Organizations that have just onboarded a fresh data science team especially ask themselves these questions. If you don't have the software and IT capabilities to design, build, and maintain a serving infrastructure, you might think it requires refactoring your existing infrastructure. Or that you have to integrate software that is not part of your core business, introducing piles of technical debt and the need for specialized developers.
But there is hope! One increasingly popular solution for this challenge is to run your algorithms live behind web APIs.
Let’s discover the benefits of ML APIs and how they can work for you.
What is an API and how does it work?
Application Programming Interfaces (or APIs for short) have been around for a long time and provide a standardized way for two software applications, not necessarily of the same type, to communicate.
Applied to our context of data science, an API allows for communication between a web page or app and your AI application. The API opens up certain user-defined URL endpoints, which can be used to send or receive requests with data. These endpoints are independent of the underlying implementation: if you update your algorithm, the interface stays the same. This minimizes the work required to update the running application.
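To make this concrete, here is a minimal, self-contained sketch of the pattern using only the Python standard library: a toy "model" served behind an HTTP endpoint, queried by a client with a JSON payload. The scoring rule, route, and field names are all illustrative, not any real service.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict(features):
    # Stand-in for a real model: a toy linear scoring rule.
    return {"score": 0.3 * features["x"] + 0.7 * features["y"]}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The endpoint stays fixed; the model behind it can change freely.
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps(predict(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any application, in any language, can now call the model over HTTP.
url = f"http://127.0.0.1:{server.server_port}/predict"
req = Request(url, data=json.dumps({"x": 1.0, "y": 2.0}).encode(),
              headers={"Content-Type": "application/json"})
result = json.loads(urlopen(req).read())
print(result)  # {'score': 1.7}
server.shutdown()
```

The client only knows the URL and the JSON contract; it has no idea whether `predict` is a one-line rule or a deep neural network, which is exactly the decoupling the API provides.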
Over the years, this universal language for applications and services to communicate securely over the internet has allowed many developers to build software solutions fast (and furiously).
Figure 1 shows how this works.
Eight benefits of exposing AI/ML models as an API
We have seen the advantages of using APIs to communicate between services. But what are the added benefits of exposing and using your data science, AI & ML models through an API?
- The main advantage is that it gives you a very clean and well-defined interface to your analytics, which integrates easily with any application: a simple cURL command is all you need!
- It provides stability: The algorithm or input data can change, but the API endpoint will remain the same.
- It checks your data and requests at the door. Because the request that the algorithm expects is so well defined, anything not corresponding to the specification will result in an error.
- It separates the iterative world of data science from the world of IT and software. Algorithms need frequent updates, while the software application from which they run needs to be stable, reliable and robust. This separation also means that data scientists can focus on building models and don't have to worry about the infrastructure.
- APIs open up data science models to the whole organization, or even customers or third parties. All in a secure and scalable way.
- Reusability. It allows your model to be used by multiple applications at the same time, from any language or framework. No additional software or settings are required, so once the deployment is running, it can be queried immediately.
- In addition, as it's a single API endpoint, there's no need to configure a load balancer or punch holes through a firewall, steps that some major cloud providers require and that add complexity to the infrastructure.
- Security is handled by only allowing requests to the model if they are signed with a token that holds the right permissions. This way it’s easy to only allow requests from inside your team, or open up the model to the rest of the world.
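Two of these benefits, validation at the door and token-based security, can be sketched in a few lines. The tokens, field specification, and scoring rule below are purely illustrative; a real serving platform performs these checks in its gateway before your model code ever runs.

```python
# Sketch of the checks an API layer performs before a model runs:
# token-based authorization and input validation. All names are illustrative.
VALID_TOKENS = {"team-token-123": {"predict"}}  # token -> permitted actions

INPUT_SPEC = {"customer_age": int, "monthly_spend": float}

def handle_request(token, payload):
    # Security: reject requests whose token lacks the right permission.
    if "predict" not in VALID_TOKENS.get(token, set()):
        return {"status": 403, "error": "invalid or unauthorized token"}
    # Validation: anything not matching the specification results in an error.
    for field, ftype in INPUT_SPEC.items():
        if field not in payload or not isinstance(payload[field], ftype):
            return {"status": 400, "error": f"bad or missing field: {field}"}
    # Stand-in for the real model.
    return {"status": 200, "score": payload["customer_age"] / 100}

ok = handle_request("team-token-123", {"customer_age": 42, "monthly_spend": 79.5})
denied = handle_request("wrong-token", {"customer_age": 42, "monthly_spend": 79.5})
rejected = handle_request("team-token-123", {"customer_age": "forty-two"})
print(ok, denied, rejected)
```

Because malformed or unauthorized requests are stopped at the boundary, the model itself only ever sees clean, well-typed input.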
Fig 2. – From data science code to scalable microservices with their own API endpoint.
Manage your analytics APIs with UbiOps
Now it’s clear why exposing your model’s code as an API makes it flexible and easy to integrate.
When you’re just starting out with deploying models, it can be tempting to expose an API endpoint from inside your model’s code, but this is rarely the smartest move.
Keeping your infrastructure and your model’s code separate helps make the application stable and reliable. API endpoints are harder to maintain when they’re baked into the model’s code. Furthermore, every update of your model then leads to downtime of the endpoint, resulting in errors for any requests made in the meantime. Not something you want for models that run in production.
Having your models deployed in UbiOps means your deployment’s endpoints are maintained and serviced by us. Endpoints are always in a standardized format and do not change when new code is uploaded. Deployments can have multiple versions running, each serving different code behind its own unique endpoint, which makes them suitable for A/B testing and other comparison techniques. When new code is uploaded for a version, the endpoint directs requests to the old model until the new model is up and running, without any downtime.
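Since each version has its own endpoint, an A/B test can be as simple as splitting traffic between two version URLs. The endpoint URLs and the 90/10 split below are hypothetical; the exact URL format depends on your serving platform.

```python
import random

# Hypothetical per-version endpoints; the URL format is illustrative only.
ENDPOINTS = {
    "v1": "https://api.example.com/deployments/recommender/versions/v1/requests",
    "v2": "https://api.example.com/deployments/recommender/versions/v2/requests",
}

def pick_version(traffic_share_v2=0.1, rng=random.random):
    # Route ~10% of requests to the candidate version, the rest to stable.
    return "v2" if rng() < traffic_share_v2 else "v1"

random.seed(0)  # fixed seed so the demo is reproducible
counts = {"v1": 0, "v2": 0}
for _ in range(1000):
    counts[pick_version()] += 1
print(counts)  # roughly 900 requests to v1, 100 to v2
```

Comparing the responses (or downstream metrics) per version then tells you whether the candidate model should be promoted.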
UbiOps allows complete control over the number of replicas and the memory of your deployments, with changes to those parameters taking effect immediately.
Fig. 3 – UbiOps makes serving data science and AI code to all your business applications easy.
Building the future: Machine Learning as a service
To recap: using APIs as a clean interface between your analytics and the applications that consume them allows for faster product development and reuse of developed models across multiple applications.
An example can be a webshop that wants to make use of a recommendation engine. Instead of running this algorithm as part of the website, it runs as a separate service that the website front-end calls when needed.
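A practical bonus of this separation is graceful degradation: if the recommendation service is briefly unreachable, the webshop can fall back to a static list instead of breaking. The endpoint URL and payload below are placeholders, not a real service.

```python
import json
from urllib.request import Request, urlopen

# Placeholder endpoint of a hypothetical recommendation service.
RECS_URL = "https://recs.invalid/recommendations"

POPULAR_ITEMS = ["item-1", "item-2", "item-3"]  # static fallback list

def recommendations_for(user_id, timeout=0.5):
    # The engine runs as a separate service, so the webshop front-end can
    # degrade gracefully (e.g. show bestsellers) when it is unreachable.
    try:
        req = Request(RECS_URL, data=json.dumps({"user": user_id}).encode(),
                      headers={"Content-Type": "application/json"})
        return json.loads(urlopen(req, timeout=timeout).read())["items"]
    except (OSError, KeyError, ValueError):
        # OSError covers connection and DNS failures; the others cover
        # malformed responses.
        return POPULAR_ITEMS

print(recommendations_for("user-42"))  # falls back: no service at the URL
```

Had the recommendation engine been compiled into the website itself, an outage or a bad model update would take the whole page down with it.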
Or look at this low-code application into which an age estimation neural network, running as a separate external service, is integrated.
In the future, we will see more democratization of AI/ML models that run as a service behind an endpoint with the freedom and flexibility to be called by multiple applications inside and outside of your organization. This lowers the effort to start integrating ML into your applications, saving you time, effort and headaches.
UbiOps is an easy-to-use deployment and serving layer for your data science code. It turns your Python & R models and scripts into web services, allowing you to use them from anywhere at any time. So you can embed them in your own applications, website or data infrastructure. Without having to worry about security, reliability or scalability. UbiOps takes care of this for you.
UbiOps is built to be as flexible and adaptive as you need it to be for running your code, without losing on performance or security. We’ve designed it to fit like a building block on top of your existing stack, rather than having to make an effort to integrate it with your product. It lets you manage your models from one centralized location, offering flexibility and governance.
You can find more technical information in our documentation -> ubiops.com/docs
To help you get up to speed with UbiOps, we have prepared some examples and quickstart tutorials -> ubiops.com/tutorials