Deployment, serving and monitoring of your ML models made easy with Arize and UbiOps

UbiOps and Arize

UbiOps is an easy-to-use serving and hosting layer for data science code. It stands out for its ease of use and the freedom to write any code you want, without requiring in-depth IT knowledge. It is a serving, hosting and management layer on top of your preferred infrastructure. Accessible via the UI, client library, or CLI, it’s suitable for every type of data scientist.

UbiOps is especially useful for real-time applications, whether they involve simple processing scripts or complex ML models. Thanks to the scalable infrastructure, every piece of code can be scaled up and down according to your specifications.
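
To give an idea, scaling in UbiOps is configured per deployment version. Below is a minimal sketch using the UbiOps Python client; the project name, deployment name, instance counts and API token are placeholders, and the exact parameters may differ between client versions.

import ubiops

# Placeholder token; see the UbiOps docs for how to create an API token
configuration = ubiops.Configuration()
configuration.api_key['Authorization'] = 'Token <YOUR_UBIOPS_API_TOKEN>'
api = ubiops.CoreApi(ubiops.ApiClient(configuration))

# Scaling is defined on the deployment version: UbiOps scales the number of
# running instances between these bounds based on the request load
version = ubiops.DeploymentVersionCreate(
    version='v1',
    language='python3.7',      # runtime of the deployment code
    memory_allocation=2048,    # MB of memory per instance
    minimum_instances=0,       # scale to zero when idle
    maximum_instances=5,       # scale out under load
    maximum_idle_time=1800     # seconds before an idle instance is stopped
)
api.deployment_versions_create(
    project_name='arize-ubiops-tutorial',   # placeholder project name
    deployment_name='mpg-deployment',       # placeholder deployment name
    data=version
)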

What is Arize AI?

Arize AI is a Machine Learning Observability platform that helps ML practitioners successfully take models from research to production, with ease. Arize’s automated model monitoring and analytics platform helps ML teams quickly detect issues the moment they emerge, troubleshoot why they happened, and improve overall model performance. By connecting offline training and validation datasets to online production data in a central inference store, ML teams are able to streamline model validation, drift detection, data quality checks, and model performance management.

Arize AI acts as the guardrail on deployed AI, providing transparency and introspection into historically black box systems to ensure more effective and responsible AI. To learn more about Arize or machine learning observability and monitoring, visit our blog and resource hub.

Why this integration?

The more business-critical a model is, the more important observability is to keep a pulse on its health and to quickly resolve any issues that arise. 

While deployment of a production-worthy AI model poses a challenge to many, observability is another, deeper challenge that awaits a model in production. With this integration, data scientists and ML engineers can work together to develop a model, push it to production, and gain full visibility and control of its performance. 

Teams using Arize and UbiOps together are able to:

  • Validate model quality and performance prior to deploying to production.
  • Accelerate model deployment (time to value) and iterations without high ops overhead.
  • Automatically diagnose issues that emerge in production, with ability to analyze specific cohorts of problematic predictions.
  • Gain deeper visibility into how models are performing with features such as performance heatmaps, and find opportunities to deliver improvements / retraining. 


Figure 1: architecture overview of the integration

1. Integration walkthrough and instructions

To demonstrate how Arize and UbiOps can work together, we’ll use a (locally trained) TensorFlow model that predicts the miles-per-gallon (MPG) usage of a car based on attributes such as the number of cylinders, horsepower, weight and model year.
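
For reference, here is a minimal sketch of how such a model could be trained locally. It uses the classic UCI Auto MPG dataset and a small Keras regression network; the dataset URL, preprocessing and architecture are illustrative, not the exact recipe from the notebook.

import pandas as pd
import tensorflow as tf

# The classic UCI Auto MPG dataset (URL and column names for illustration)
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
columns = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
           'Acceleration', 'Model Year', 'Origin']
data = pd.read_csv(url, names=columns, na_values='?',
                   comment='\t', sep=' ', skipinitialspace=True).dropna()

features = data.drop(columns=['MPG'])
labels = data['MPG']

# A small regression network predicting MPG from the car attributes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=[features.shape[1]]),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(features, labels, epochs=100, verbose=0)

# Save under the file name that the deployment code below expects
model.save('tensorflow_model.h5')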

We’ll work in a Jupyter notebook and use the UbiOps client library to communicate with the backend that hosts and serves the code. The full notebook can be found here. The code snippets below show how UbiOps and Arize integrate.

The code block below is the deployment.py file that UbiOps uses to deploy models on its platform. When new data is sent in, it passes through the request function, where the model makes its predictions. In this example, we send both the input feature data and the actuals to this function, making it the perfect place to put our Arize logging code. We simply use Arize’s bulk_log method, passing in features, predictions, actuals, and optional prediction timestamps, and just like that our model is logged and ready to explore on the Arize platform.

import os
import datetime

import numpy as np
import pandas as pd
from tensorflow.keras.models import load_model

# Arize SDK imports; module paths vary between SDK versions
# (newer versions expose ModelTypes via arize.utils.types)
from arize.api import Client
from arize.types import ModelTypes


class Deployment:

    def __init__(self, base_directory, context):
        # Load the locally trained model shipped with the deployment package
        model_file = os.path.join(base_directory, "tensorflow_model.h5")
        self.model = load_model(model_file)

        # The Arize credentials are read from environment variables
        # set on the deployment version
        self.arize = Client(
            organization_key=os.environ.get('ARIZE_ORGANIZATION_KEY'),
            api_key=os.environ.get('ARIZE_API_KEY')
        )

    def request(self, data):
        # 'data' is the request payload; assuming an input field named 'data'
        # of type file, UbiOps provides a local path to the uploaded CSV
        input_data = pd.read_csv(data['data'])
        actuals = input_data.pop('MPG')
        prediction = self.model.predict(input_data)

        ########### ARIZE CODE HERE ###########

        # Use the row index as a (string) prediction ID for each record
        ids = pd.DataFrame(input_data.index.values).applymap(str)

        # OPTIONAL: simulate predictions evenly distributed over 30 days
        # by manually specifying prediction timestamps
        current_time = datetime.datetime.now().timestamp()
        earlier_time = (datetime.datetime.now() - datetime.timedelta(days=30)).timestamp()
        optional_prediction_timestamps = np.linspace(earlier_time, current_time, num=len(ids))
        optional_prediction_timestamps = pd.Series(optional_prediction_timestamps.astype(int))

        # Log features, predictions, actuals and timestamps to Arize in a single call
        responses = self.arize.bulk_log(
            model_id="arize-ubiops-tutorial",
            model_type=ModelTypes.NUMERIC,
            model_version="v1",
            prediction_ids=ids,
            prediction_labels=pd.DataFrame(prediction),
            prediction_timestamps=optional_prediction_timestamps,
            actual_labels=actuals,
            features=input_data)
        #######################################

        # Write the predictions to a CSV file that is returned as the request result
        print('Writing prediction to csv')
        pd.DataFrame(prediction).to_csv('prediction.csv', header=['MPG'], index_label='index')

        return {
            "prediction": 'prediction.csv',
        }
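
With the deployment package uploaded, two more steps wire everything together: storing the Arize credentials as secret environment variables, so the os.environ.get(...) calls in __init__ resolve, and sending a request with the input CSV. Below is a minimal sketch with the UbiOps Python client; the project and deployment names are placeholders, and the blob-based file handling shown here matches older UbiOps client versions (newer versions use buckets instead).

import ubiops

configuration = ubiops.Configuration()
configuration.api_key['Authorization'] = 'Token <YOUR_UBIOPS_API_TOKEN>'
api = ubiops.CoreApi(ubiops.ApiClient(configuration))

# Store the Arize credentials as secret environment variables on the deployment,
# so the deployment's __init__ can read them from os.environ
for name, value in [('ARIZE_ORGANIZATION_KEY', '<YOUR_ARIZE_ORG_KEY>'),
                    ('ARIZE_API_KEY', '<YOUR_ARIZE_API_KEY>')]:
    api.deployment_environment_variables_create(
        project_name='arize-ubiops-tutorial',
        deployment_name='mpg-deployment',
        data=ubiops.EnvironmentVariableCreate(name=name, value=value, secret=True)
    )

# Upload the input CSV as a blob and reference it in the request
blob = api.blobs_create(project_name='arize-ubiops-tutorial', file='input_data.csv')
result = api.deployment_requests_create(
    project_name='arize-ubiops-tutorial',
    deployment_name='mpg-deployment',
    data={'data': blob.id}
)
print(result.result)   # e.g. a reference to the returned prediction.csv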

2. Example end result visualised in Arize 

Here’s an example of how Arize visualises model performance in production, with data coming in on a daily basis. The platform provides a snapshot of the overall health of a model, surfacing key metrics such as accuracy, false-positive rate and recall (see fig. 2). Moreover, the current performance distributions can be compared against training, validation or historical performance baselines (see fig. 3).



Figure 2: Arize performance dashboard



Figure 3: Arize PSI monitor example

3. Wrap up

This example shows how one can easily deploy a TensorFlow model in a fully scalable (containerised) environment, directly available for high-frequency requests. In this case, anyone who wants to know the expected MPG of a car can receive the result in a matter of seconds, which makes it ideal for, say, a web app providing such a service. What’s more, with Arize’s monitoring functionality you can keep track of the model’s performance, monitor it automatically, and conduct pre-launch validations to ensure a successful launch of your project.

Using the provided integration notebook you can deploy and monitor your own model quickly. The full notebook can be found here.

In case of questions, remarks or suggestions, please don’t hesitate to contact UbiOps via their Slack channel or get in touch with Arize via their Community on Slack.

This blog article has also been published on Medium.
