New UbiOps features November 2022

23 November 2022 · Product update, UbiOps

UbiOps release news – version 2.20.0

On the 24th of November 2022 we released the latest version of UbiOps SaaS, with a lot of new functionality and improvements. The major new feature in this release is the addition of pipeline operators, which opens up a ton of new options for using UbiOps pipelines. An overview of the newly added pipeline operators, with explanatory screenshots, is shown below.

We have also prepared a release demo for you!

✔️ Conditional logic operator

This operator allows you to conditionally trigger the next object in your pipeline. This can be helpful for implementing things like A/B tests or when you have models in your pipeline that should only run under specific conditions.

How to use it:

Just add the operator to your canvas, provide a conditional expression, for instance age <= 18, and define any expression variables; in this case, that would be age of type integer.

When you press the save button, the operator is added to your canvas and you can connect it to other objects in your pipeline.

You need to make a connection that can provide the expression variables as input for the operator. Next, connect the operator as a source to the pipeline object you want to conditionally trigger, in combination with a source object that provides the actual input variables to that object. The conditional operator will not pass any data by itself.
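Conceptually, the operator evaluates your expression against the variables you defined and only triggers the next object when the result is true. A minimal Python sketch of that behaviour, assuming the age <= 18 example from above (the evaluate_condition helper is illustrative, not part of the UbiOps API):

```python
# Illustrative sketch of what the conditional logic operator evaluates.
# This helper is NOT part of the UbiOps API.
def evaluate_condition(expression, variables):
    """Return True if the expression holds for the given expression variables."""
    # Only the defined expression variables are visible to the expression
    return eval(expression, {"__builtins__": {}}, dict(variables))

# The downstream object is only triggered when the condition holds
triggered = evaluate_condition("age <= 18", {"age": 16})
```

If the condition evaluates to false, the downstream object is simply not triggered for that request.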

✔️ Raise error operator

Sometimes you might want to raise an error and stop your pipeline if your input data doesn’t meet certain conditions. Well, that’s exactly what the raise error operator can help you with! It allows you to trigger a custom error message and stop the currently active pipeline request.

How to use it:

Add it to your canvas and define a custom error message. Typically you want to trigger an error only if a certain condition is met, so it makes the most sense to connect it to a conditional logic operator that evaluates whether your error condition holds.
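In plain Python terms, the behaviour looks like the sketch below: when the configured condition is met, the request stops with your custom message (the exception class and helper are illustrative, not the UbiOps API):

```python
# Conceptual sketch of the raise error operator (NOT the UbiOps API).
class PipelineRequestError(Exception):
    """Illustrative stand-in for a stopped pipeline request."""

def raise_error_operator(error_message, condition_met):
    # When the error condition holds, stop the pipeline request
    # with the custom message; otherwise let the request continue.
    if condition_met:
        raise PipelineRequestError(error_message)

# Condition not met: the request continues unaffected
raise_error_operator("Input age must be positive", condition_met=False)
```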

✔️ Boost pipeline throughput with the create subrequests operator

Do you have a part of your pipeline that could benefit from parallelization? The create subrequests operator can help you out. It splits the output of a previous pipeline object into batches of requests for the next pipeline object. That object then processes these batches in parallel by scaling up to its configured maximum number of instances. This can significantly speed up your pipeline!

How to use it:

To make use of the create subrequests operator you should connect it to a pipeline object that returns a list of output dictionaries, as opposed to a single output dictionary. Let’s say you have a video processing pipeline where you want to process each frame in parallel. In this scenario you would have a pipeline object that splits the video into frames. If you make sure that the request function of that deployment outputs a list of dictionaries, as shown below, the create subrequests operator will send all items in the list as separate requests to the next pipeline object:

def request(self, data):
    # <some code to split the video into frames>
    return [{'frame': 'frame1.png'}, {'frame': 'frame2.png'}, {'frame': 'frame3.png'}, ...]

You can use the create subrequests operator to split that list into subrequests that can be processed individually by the next object in your pipeline. For a full example, see the documentation.
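Conceptually, the operator just fans the list out into one request per item. A minimal sketch of that fan-out (the create_subrequests helper is illustrative, not the UbiOps API):

```python
# Conceptual sketch (NOT the UbiOps API): the create subrequests operator
# turns a list of output dictionaries into one subrequest per item.
def create_subrequests(list_output):
    """Each dictionary in the list becomes a separate request for the next object."""
    return [item for item in list_output]

frames = [{'frame': 'frame1.png'}, {'frame': 'frame2.png'}, {'frame': 'frame3.png'}]
subrequests = create_subrequests(frames)
# Each of these subrequests can now be processed in parallel,
# up to the maximum number of instances of the next object.
```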

✔️ Aggregate all processed subrequests with the collect subrequests operator

If you use the create subrequests operator to parallelize a certain part of your pipeline, you’ll typically also want to aggregate all results again at a later stage. The collect subrequests operator can help you to aggregate all your subrequests into a single list of data items that can be passed on to the next object, stopping the parallelization.

How to use it:

The collect subrequests operator needs to know where to send the collected subrequests to, so it is important to add your destination object to the pipeline before adding the operator. For this destination object to have access to the full list of collected data from the operator, it needs a requests function in its deployment code, as opposed to the standard request function.

If the API detects a requests function in your deployment code, it will send a list of dictionaries as the data input, as opposed to the standard single dictionary.

If we take the video processing example again, where we have now processed each frame and want to compile the frames back into a video, the requests function of our compiler deployment should look something like this:

def requests(self, data):
    frames = []
    # iterate over the input data (one item per collected subrequest)
    for item in data:
        frames.append(item)
    # <code to put the frames back together into processed_video>
    # In the return statement we return a single video
    return {"processed_video": processed_video}
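The difference between request and requests can be sketched as a simple dispatch: a deployment with a requests function receives the whole list at once, while a plain request function is called once per item. This dispatch function and the FrameCompiler class are illustrative assumptions, not the actual UbiOps runtime code:

```python
# Illustrative sketch of how the platform chooses between the two handlers.
# NOT the actual UbiOps runtime code.
def dispatch(deployment, data_items):
    if hasattr(deployment, "requests"):
        # A `requests` function gets the full list of dictionaries at once
        return deployment.requests(data_items)
    # A standard `request` function gets one dictionary per call
    return [deployment.request(item) for item in data_items]

class FrameCompiler:
    """Hypothetical deployment with a batch `requests` handler."""
    def requests(self, data):
        # Here we just count the collected frames for illustration
        return {"frame_count": len(data)}

result = dispatch(FrameCompiler(), [{"frame": "frame1.png"}, {"frame": "frame2.png"}])
```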

✔️ Count subrequests operator

If you are working with subrequests, it might come in handy to know how many subrequests are present at a certain point in your pipeline. In that case you can use the count subrequests operator.

How to use it:

Just connect the operator to the object whose subrequests you want to count! The operator will output an integer with the result.
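The behaviour is as simple as it sounds, as this minimal sketch shows (the count_subrequests helper is illustrative, not the UbiOps API):

```python
# Conceptual sketch (NOT the UbiOps API): the count subrequests operator
# outputs how many subrequests exist at that point in the pipeline.
def count_subrequests(subrequests):
    return len(subrequests)

count = count_subrequests([{'frame': 'frame1.png'}, {'frame': 'frame2.png'}])
```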

✔️ Function operator

The function operator allows you to manipulate data fields with Python expressions before sending them to the next object. This can for instance be helpful in quick unit conversions.

How to use it:

This operator is quite similar to the conditional one. You just add it to your canvas, provide the expression and define your expression variables.

For the function operator, you can also change the datatype of the output field. The output field is always called /output. After adding it, you can connect it like any other object in your pipeline.
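Conceptually, the operator evaluates your Python expression on the incoming fields and writes the result to the output field. A minimal sketch using a unit-conversion example (the apply_function helper and the temperature_c variable name are illustrative, not part of the UbiOps API):

```python
# Illustrative sketch of what the function operator evaluates.
# This helper is NOT part of the UbiOps API.
def apply_function(expression, variables):
    """Evaluate a Python expression on the defined expression variables."""
    return eval(expression, {"__builtins__": {}}, dict(variables))

# Example: a quick unit conversion from Celsius to Fahrenheit
output = apply_function("temperature_c * 9 / 5 + 32", {"temperature_c": 100.0})
```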

Have fun building pipelines in UbiOps!

Those are all the operators we added! Do you need more information on how to use them? Head over to our documentation, or send us a message.