Instantly scale AI and machine learning workloads on GPU on-demand
Functionality Technology UbiOps
January 12, 2023 / January 8, 2024 by UbiOps
How do you speed up a TensorFlow model by 200%? Machine learning models nowadays require more and more compute power. According to a study from OpenAI, the compute power needed to train AI models has been rising ever since it was first measured in the 1960s, doubling every two years up until […]
Blog Functionality Technology
March 28, 2022 / July 26, 2023 by UbiOps
I wrote an article on how you can improve neural network inference performance by switching from TensorFlow to the ONNX runtime. But now UbiOps also supports GPU inference. We all know GPUs can improve performance a lot, but how do you get your ONNX model running on a GPU? And should I run all of my neural […]
December 8, 2021 / January 3, 2024 by UbiOps
Some time ago I wrote an article comparing the performance of the TensorFlow runtime versus the ONNX runtime. In that article I showed how to improve inference performance greatly by simply converting a TensorFlow neural network to ONNX and running it using the ONNX runtime. I didn't explain what ONNX actually is; this article […]