Instantly scale AI and machine learning workloads on GPU on-demand
Functionality, Technology, UbiOps
January 12, 2023 / January 8, 2024 by UbiOps
How do you speed up a TensorFlow model by 200%? Machine learning models require ever more compute power. According to a study from OpenAI, the compute needed to train AI models has been rising ever since AI was first used in the 1960s, with the required compute doubling every two years up until […]
Blog, Functionality, Technology
March 28, 2022 / July 26, 2023 by UbiOps
I wrote an article on how you can improve neural network inference performance by switching from TensorFlow to ONNX Runtime. But now UbiOps also supports GPU inference. We all know GPUs can improve performance a lot, but how do you get your ONNX model running on a GPU? And should you run all of your neural […]
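The full post covers the details; as a rough illustration, a minimal sketch of GPU inference with ONNX Runtime in Python might look like the snippet below. The model path, input name, and input shape are placeholders, not values from the article, and the CUDA execution provider assumes the onnxruntime-gpu package is installed.

```python
import numpy as np
import onnxruntime as ort  # requires the onnxruntime-gpu package for CUDA support

# Ask ONNX Runtime to run on the GPU, falling back to CPU if CUDA is unavailable.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path to your exported ONNX model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Placeholder input: adjust the name and shape to match your model's signature.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```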