Building a low-code app powered by AI (with Mendix and UbiOps)

Low-code platforms such as Mendix are a great way to develop web and mobile applications in a fraction of the time it normally takes. With such a platform you don’t code your app line by line; instead, you construct it from pre-built elements and components. This makes building your own app much faster, more intuitive and easier to debug.

Despite the growing interest in data science and machine learning, most low-code platforms do not include AI functionality themselves and rely on integrations with other tools. So what if you have a model available, or are able to build one, and you want to turn it into an end-to-end application for your client, such as an image recognition application, a chatbot or a recommendation system?

In this article we will show you how you can do this via the example of an age estimation application. With this app you can upload a picture of yourself, a friend, or some random person from the internet and the app will estimate their age based on the person’s face. Pretty cool, right?

To do so we use a pre-trained (open source) neural network, and two tools: Mendix and UbiOps. Both Mendix and UbiOps can be used for free, so you can try it out yourself as well.

Let’s walk through it step-by-step.

 

Components & Overview

Image recognition model and training data
For the image classification itself, we will be using a pre-trained neural network in the ONNX format, which we run with the ONNX Runtime library. The age estimation model was developed at ETH Zurich and trained on a publicly available image dataset scraped from IMDB and Wikipedia, containing faces of celebrities and public figures along with their ages.

For more details and information about the model you can visit: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/

Reference: Rasmus Rothe, Radu Timofte and Luc Van Gool, “Deep expectation of real and apparent age from a single image without facial landmarks”, International Journal of Computer Vision, vol. 126, 2018.

 

Serving infrastructure and front-end 

To build the application logic and front-end, we use the low-code platform Mendix.

To deploy and run the ONNX neural network and use it as a service with an API endpoint we make use of UbiOps. The serving endpoint of the model in UbiOps can be consumed by the Mendix app to make requests (see figure 1).

 

Why this setup? 

It doesn’t require any knowledge of how to set up the underlying IT architecture. We don’t have to worry about setting up servers, deploying our application, configuring networking, user management, scalability or uptime. Mendix and UbiOps are both SaaS services that take away the difficult work, allowing us to create this app in no time!

 

The figure below shows how everything comes together:

Figure 2: high level architecture

 

Requirements

In order to get started you need the following:

    • Mendix Studio Pro installed (only available on Windows, or via Parallels Desktop or similar on Mac).

    • The deployment package that contains the code we will run on UbiOps and its code dependencies.

    • The two model files below; please put these files in the deployment package:

-> Face detection model: https://storage.googleapis.com/ubiops/data/Integration%20with%20other%20tools/mendix-age-estimation/mendix-model-files/version-RFB-320.onnx

-> Age estimation model: https://storage.googleapis.com/ubiops/data/Integration%20with%20other%20tools/mendix-age-estimation/mendix-model-files/vgg_ilsvrc_16_age_imdb_wiki.onnx

    • Basic knowledge of Python and a basic understanding of REST APIs.

 

Getting to work: Deploying the age estimation model 

First we will look at deploying the deep learning model on UbiOps so we can make requests to it.

After we upload our code & pre-trained models, UbiOps creates a Docker image with that code and all the necessary packages and dependencies included. After building, it’s available as a live service with a REST API endpoint that we can call from Mendix. This basically enables us to run any type of data processing code behind an endpoint and use it from wherever we want.

To deploy the pretrained ONNX model on UbiOps, we need to write some Python code first. UbiOps requires a deployment.py file with a request function in it. This is the function UbiOps will call every time a request is made through the serving endpoint.

We will use the `deployment.py` template available on the UbiOps GitHub and make a few edits to adjust it for our purpose.

Here is the final code of the `deployment.py`:

import cv2
import onnxruntime as ort
import argparse
import numpy as np
import sys
import os
from box_utils import predict
from PIL import Image
import base64
import io


def preprocess_image(img_base64):
    """
    Decode the base64 encoded image into a NumPy array.
    """
    img = Image.open(io.BytesIO(base64.b64decode(str(img_base64))))
    img_arr = np.asarray(img)
    return img_arr


# enlarge the detected bounding box to a square around the face
def scale(box):
    width = box[2] - box[0]
    height = box[3] - box[1]
    maximum = max(width, height)
    dx = int((maximum - width) / 2)
    dy = int((maximum - height) / 2)
    bboxes = [box[0] - dx, box[1] - dy, box[2] + dx, box[3] + dy]
    return bboxes


# crop the image to the bounding box
def cropImage(image, box):
    num = image[box[1]:box[3], box[0]:box[2]]
    return num


# face detection method
def faceDetector(orig_image, face_detector, threshold=0.7):
    image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (320, 240))
    image_mean = np.array([127, 127, 127])
    image = (image - image_mean) / 128
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    input_name = face_detector.get_inputs()[0].name
    confidences, boxes = face_detector.run(None, {input_name: image})
    boxes, labels, probs = predict(orig_image.shape[1], orig_image.shape[0], confidences, boxes, threshold)
    return boxes, labels, probs


# age estimation method
def ageClassifier(orig_image, age_classifier):
    image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.transpose(image, [2, 0, 1])
    image = np.expand_dims(image, axis=0)
    image = image.astype(np.float32)
    input_name = age_classifier.get_inputs()[0].name
    ages = age_classifier.run(None, {input_name: image})
    # the model outputs a probability for every age from 0 to 100;
    # the estimate is the expected value of that distribution
    age = round(sum(ages[0][0] * list(range(0, 101))), 1)
    return age


class Deployment:

    def __init__(self, base_directory):
        """
        Initialisation method for the deployment. This will be called at start-up of the model in UbiOps.

        :param str base_directory: absolute path to the directory where this file is located.
        """
        onnx_model = os.path.join(base_directory, "vgg_ilsvrc_16_age_imdb_wiki.onnx")
        face_detector_onnx = os.path.join(base_directory, "version-RFB-320.onnx")
        self.age_classifier = ort.InferenceSession(onnx_model)
        self.face_detector = ort.InferenceSession(face_detector_onnx)

    def request(self, data):
        # decode the incoming base64 string, detect faces and estimate an age per face
        original_image = preprocess_image(data['photo'])
        boxes, labels, probs = faceDetector(original_image, self.face_detector)
        ages = []
        for i in range(boxes.shape[0]):
            box = scale(boxes[i, :])
            cropped = cropImage(original_image, box)
            # gender = genderClassifier(cropped)
            ages.append(ageClassifier(cropped, self.age_classifier))

        # Here we return an integer with the estimated age (of the first detected face)
        return {'age': int(ages[0])}


Some notes on the code:

    • The input of the request function is a string. Later, we will send the image from Mendix as a base64-encoded string to UbiOps. This is because Mendix cannot send files via its REST call module.

    • The output of the request function is an integer for the estimated age.

    • We have added one function (preprocess_image) that decodes the base64 string into a NumPy array, and placed it outside of the Deployment class.
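If you want to sanity-check this code before uploading it, you can call the Deployment class locally. The sketch below is only an illustration and makes a few assumptions: the two .onnx files and box_utils.py sit next to deployment.py in your working directory, the packages from requirements.txt are installed, and face.jpg is a placeholder for any test image.

import base64
from deployment import Deployment

# encode a test image as a base64 string, just like Mendix will do later
with open("face.jpg", "rb") as f:  # "face.jpg" is a placeholder file name
    photo_b64 = base64.b64encode(f.read()).decode()

# initialise the deployment the way UbiOps would at start-up, and make one request
deployment = Deployment(base_directory=".")
result = deployment.request({"photo": photo_b64})
print(result)  # for example: {'age': 31}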

       

Within UbiOps, the zipped folder “deployment package” is used to upload our code to the platform so UbiOps can deploy everything. The structure of the zip is as follows (note that there is a parent folder inside the zip):

Figure 3: Contents of the deployment_package.zip file

    • The deployment.py file (see code above) contains the request function that does the actual data handling and model inference in Python. 

    • A requirements.txt file that lists the required Python packages:

# This file contains the package requirements for the model,
# installed via pip before model initialization
opencv-python==4.5.1.48
imageio==2.5.0
onnxruntime

    • The downloaded ONNX model files (.onnx files) that we refer to in the deployment.py file.

    • A `ubiops.yaml` file. This is used to tell UbiOps which OS-level packages need to be installed in the Docker image. We need this for the system libraries that our Python packages (such as OpenCV) depend on:

apt:
  packages:
    - ffmpeg
    - libsm6
    - libxext6

    • The setup_logging.py file is not mandatory, but is used to integrate logs from the code with UbiOps.
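Putting this all together, the contents of the zip look roughly like the sketch below. The parent folder name is just an example, and box_utils.py is the helper module imported by deployment.py:

deployment_package.zip
└── deployment_package/
    ├── deployment.py
    ├── box_utils.py
    ├── requirements.txt
    ├── ubiops.yaml
    ├── setup_logging.py
    ├── version-RFB-320.onnx
    └── vgg_ilsvrc_16_age_imdb_wiki.onnx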

Now we log in to UbiOps to deploy our model. With the finished deployment package, we can create the first version of the deployment via the UbiOps UI (this can also be done via the CLI or client library).

First, we go to ‘Deployments’ in the menu on the left. There we click ‘Create’ and tell UbiOps a few things:

    • We give our model a nice original name like ‘mendix-age-estimation-app’

    • The input and output types and fields that our model expects. You can see in the request() function in ‘deployment.py’ that these are the following:
-> As input we define a variable ‘photo’ of type ‘string’
-> As output we define a variable ‘age’ of type ‘integer’

Figure 4: Deployment creation step in UbiOps

 

Figure 5: Deployment creation step in UbiOps (bottom of page)

 

We click “Next step” and define the following:

    • Set the language to Python 3.8.

    • Click ‘Upload code’ and select the zip file from our laptop to upload.

    • The rest we can leave as it is.

There are some useful advanced settings you can play with, but we don’t need them for now:

Figure 6: Uploading our code (deployment package) in UbiOps

 

Now we click “Create” and UbiOps automatically starts building and deploying the model. Note that it might take a while for the deployment package to upload: it is around 500 MB, which can take some time depending on your internet connection.

After the upload, the status of the version changes to ‘Building’. We can follow what is happening in the background by clicking on the version name and, on the next page, clicking the logs icon next to the status. You can see the building logs like this:

Figure 7: The logs in UbiOps from the building of the container

 

After a few minutes of building (installing ONNX Runtime takes a while), our age estimation model is ready to be used! You can test it if you want by clicking on the version and clicking ‘Create direct request’. Note that the model expects a base64 string of an image, so you need to convert the image first; in the finished app, Mendix will do this in the background.
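To generate such a base64 string for a test request, you can for example run the small script below and paste the contents of the output file into the ‘photo’ input field (face.jpg is again just a placeholder):

import base64

with open("face.jpg", "rb") as f:  # placeholder test image
    photo_b64 = base64.b64encode(f.read()).decode()

# write the string to a text file so you can copy it into the 'photo' field
with open("photo_b64.txt", "w") as f:
    f.write(photo_b64)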

Figure 8: The model is deployed and available for requests

 

Our ONNX model is now running live and we can send data to it, great!

As a last step, we need to create a service user with an API token in UbiOps, so the Mendix app can authenticate with the UbiOps endpoint. You can do this in the Users & Permissions tab on the left. Assign the “project-admin” role to this service user so it is allowed to make requests, and make sure to copy the token and save it for later.

Figure 9: Add a token and save it for later

 

Now we will switch to Mendix to create the front end and the connection to UbiOps. 

 

Building the Mendix front-end

To develop the front-end of our app we use Mendix Studio (web app) and Mendix Studio Pro (desktop version).

We will create a simple app which has three pages:

    • a home page with a button to upload an image.

    • a page to upload the image (pop up).

    • a page to display the estimated age of the person in the picture.

Note: later, we also added a fourth page that provides the user with more info and links to this article. The first three pages are connected via two microflows because of the underlying logic (more on that later). The fourth page is stand-alone and is opened by clicking the “?” on the second page.

 

Step 1: Defining the Domain Model

As a first step, we create a “domain model” entity. This is basically the information/data model of the app. Our entity is called “Photo” and has a ‘System.Image’ property so Mendix knows it includes an image. Attached to it are two attributes called age and photo. In these attributes we will later store the uploaded photo as a string, and the age returned by UbiOps as an integer.


Figure 10: Domain model in Mendix

 

Step 2: Creating the home page, the upload page and the microflow.

First we create a “form” page using one of the pre-built templates in Mendix. After some adjustments and adding a button that says “upload your picture here” (see figure 11), we add a “microflow” to trigger another page where one can actually upload and submit the picture (see figure 12). Mendix uses these so-called microflows to create the logic behind the pages.


Figure 11: Home page for the age estimation app (editing mode, studio view)

Figure 12: Pop up page with file upload widget (editing mode)

This is simply a page with a standard widget called “image uploader” and some layout customizations. You can drag and drop it onto the page. Not much coding to be done here.

Clicking the button “upload your picture here” triggers the microflow that can be seen in figure 13. This microflow first creates an object in the entity “Photo” (see domain model), then opens the pop-up to upload a photo (see figure 12), and then closes the page once done. For more detail on microflows see step 3.


Figure 13: Microflow to open and close the “upload image” pop-up 

 

Step 3: Create microflow to call UbiOps API

Once a user clicks “submit” in figure 12, a new (and more advanced) microflow is triggered to encode the image, call the UbiOps API and return the response value. In this step, we explain how that microflow works. 


Figure 14: Overview of microflow to call the UbiOps API and return the result

Mendix must first “commit” the image, in other words “save” it in the temporary database of our app. The next step is to encode the image into a string. This is required because Mendix cannot send files with the REST call functionality. So we use the “Base64 encoder” from the Community Commons Functions Library. The input is the entity Photo, whereas the output is a string stored in the attribute “photo” attached to the entity.

Figure 15: base64 encoder

 

This means that we now have an attribute called “photo” in the Mendix database, whose value is the base64-encoded string of the original image.

Now we are ready to set up the REST API POST call to the API endpoint of the deployed deep learning model in UbiOps. Figure 16 shows the Mendix ‘Call REST’ settings for the HTTP method (POST), authentication, HTTP headers, the request and the response. Here you can paste the API URL of the model in UbiOps (starting with https://api.ubiops.com/v2.1/…). Going through all the details is beyond the scope of this article.

Figure 16: REST API call to UbiOps details

 

Figure 17: HTTP Headers configuration for the Call REST module

 

In the HTTP Headers tab, we add the API token from UbiOps to the HTTP header for the Mendix request. The value should include the ‘Token’ keyword in the string as shown in the image.
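If you want to verify the endpoint and token outside of Mendix, the same call can be made from a few lines of Python. This is only a rough sketch: the exact request URL and payload format should be taken from your deployment’s page and the UbiOps API documentation, and the project name, deployment name, token and image file below are placeholders.

import base64
import requests

# placeholders: use your own project name, deployment name and API token
URL = "https://api.ubiops.com/v2.1/projects/your-project/deployments/mendix-age-estimation-app/requests"
TOKEN = "Token abc123..."  # the 'Token' keyword is part of the header value

with open("face.jpg", "rb") as f:  # placeholder test image
    photo_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    URL,
    headers={"Authorization": TOKEN},
    json={"photo": photo_b64},  # the 'photo' input field defined for the deployment
)
print(response.json())  # the output field 'age' is part of the response body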

It is important to note that the response from UbiOps is stored in a new variable called “age”. See figure 18 for the configuration.

Figure 18: Configuring the response from UbiOps and storing it in a new variable “age”

 

Note that in the domain model (see figure 10) you can already see “age” as an attribute of the entity “Photo”. Storing the returned value for “age” is done in the second “change photo” step, similar to the first “change photo” step (see the microflow in figure 14).

To correctly map the return value from UbiOps and store it in the variable “age”, we create an import mapping. This will select the right variable from the HTTP response. The JSON format as configured in Mendix is shown here: 


Figure 19: JSON format as configured in Mendix

 


Figure 20: the import mapping to correctly map the response from UbiOps 

 

Finally, the last step in the microflow is to trigger a pop-up page with the value of the response (see figure 22). This means we need to create another page, with drag-and-drop elements, to which we pass the response value.


Figure 21: show the result page (3) and pass the entity Photo

 

Step 4: Create a page to display the result 

As seen in figure 21, we need a page that displays the attribute “age”. We create this third page by dragging and dropping elements onto it and selecting the attribute “age” as its data source.


Figure 22: Display of the response from the call to the UbiOps API. 

 

Step 5: Publishing the Mendix application

Last but not least, we publish the application and make it accessible to the public by managing the permissions. See the Mendix docs for more information on how to do this.

 

Wrapping up 

With two free-to-use platforms, an open-source AI model and limited knowledge of programming or software development, you can create your own AI-powered application. We did not focus on the performance of the deep learning model for this project, so the app might estimate you to be 20 years younger than you actually are. However, the aim was to illustrate how easy it is to build an end-to-end application with just two platforms and some knowledge of Python.

Would you like to build something similar? Feel free to create a free account with Mendix and UbiOps. In case of any questions, let us know. We hope you enjoyed this read and hopefully, we gave you some inspiration for your own project! 
