Example uses for GPUs in Machine Learning 

What type of applications would benefit from using GPUs?

Graphics processing units (GPUs) have been around for decades. They were originally used for gaming and graphics rendering, and more recently for mining bitcoin; over the last decade, their use has extended to Machine Learning (ML) as well. Their ability to run tens of thousands of parallel threads, rapidly solving large problems with substantial inherent parallelism, makes them well suited for data science applications too. In other words, GPUs are good at doing many simple calculations at the same time. CPUs, on the other hand, sacrifice this high throughput for lower latency, but can handle a much wider variety of tasks. Now you’re probably asking yourself: which applications are the best fit for GPUs? This blog will discuss exactly that.
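To make the throughput-versus-latency trade-off concrete, here is a minimal sketch in plain Python (the image and brightness values are purely illustrative): brightening an image is one tiny, independent calculation per pixel, which is exactly the pattern GPUs excel at.

```python
# A tiny grayscale "image": each pixel is an independent 8-bit brightness value.
image = [[50, 120, 200], [0, 255, 90]]

def brighten(pixel, amount=40):
    # One tiny, independent calculation per pixel -- no pixel depends
    # on any other, so all of them could be computed simultaneously.
    return min(pixel + amount, 255)

# A CPU applies brighten() pixel by pixel; a GPU would launch one thread
# per pixel and update them all at (nearly) the same time.
brightened = [[brighten(p) for p in row] for row in image]
print(brightened)  # [[90, 160, 240], [40, 255, 130]]
```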

The three topics I want to cover in this article are:

  • Deep Learning
  • Computer vision
  • Natural Language Processing (NLP)

For each topic, I will give a short explanation about what it is, and list some examples.

Deep learning and GPUs

The first topic I want to cover in this blog is deep learning, which is a subset of machine learning. To train a classical machine learning model, you have to start with a set of predefined features before the model is able to learn from them. With deep learning this isn’t necessary, as it does the feature extraction and classification on its own.

Machine Learning Deep Learning and GPUs

Source: https://xd.adobe.com/ideas/principles/emerging-technology/what-is-computer-vision-how-does-it-work/

This is why deep learning is very suitable for tasks where you want to automatically extract higher-level representations and features from data that are hard to describe manually, such as image recognition, where the neural network essentially learns the important features itself.

Let’s take a fruit sorting machine as an example, one that needs to sort apples and pears. If this machine ran on a classical machine learning model, we would first have to describe to the model the features that differentiate apples from pears, such as shape, weight, and color. A deep learning model does not need this prior knowledge: it is able to sort the apples and pears by looking at them and learning the features without human intervention. To do this, deep learning models require far more data to learn from than machine learning models. Where a machine learning model works with datasets of thousands or tens of thousands of samples, a typical deep learning model works with datasets of millions of samples. This is also reflected in the training time: a machine learning model trains relatively quickly thanks to its smaller dataset, while a deep learning network can take a huge amount of time to train because of the sheer volume of data.
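To illustrate the classical-machine-learning side of the fruit example, here is a minimal sketch. The features and thresholds (weight, roundness) are hypothetical and chosen by a human, which is exactly the step a deep learning model would replace by learning its own features directly from images.

```python
# Hypothetical handcrafted features for the fruit sorter: weight in grams
# and a "roundness" score between 0 and 1. In classical machine learning
# WE pick these features; a deep learning model learns its own.
samples = [
    {"weight": 180, "roundness": 0.95},  # apples tend to be rounder
    {"weight": 170, "roundness": 0.60},  # pears are more elongated
]

def classify(fruit):
    # A simple human-designed decision rule over the predefined features.
    return "apple" if fruit["roundness"] > 0.8 else "pear"

print([classify(f) for f in samples])  # ['apple', 'pear']
```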

Nowadays, deep learning models are everywhere. Facial recognition on phones, for example, runs on deep learning models. In fact, the two other topics I want to talk about in this article would not be possible without deep learning: computer vision typically uses Convolutional Neural Network (CNN) models, and the voice assistants I will mention later also rely heavily on deep learning. Deep learning networks are also being used to diagnose patients. Researchers at the Icahn School of Medicine at Mount Sinai developed a deep learning model that can detect acute conditions such as strokes and hemorrhages 150 times faster than humans; the model was able to detect the disease in 1.2 seconds!

Computer Vision and GPUs

Computer vision gives devices human-like vision: the ability to perceive objects or patterns in images or videos and to use that information for further processing or decision making, similar to how our brain works. A computer sees an image as a grid of pixels. In a grayscale image, the brightness of each pixel is represented by a single 8-bit number, ranging from 0 (black) to 255 (white). With one such 8-bit value per red, green, and blue channel, a computer vision model can distinguish 256 × 256 × 256 = 16,777,216 different colors, while the average number of colors humans can see is around a million.
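The color count above follows directly from the 8-bit representation:

```python
# Each channel (red, green, blue) is one 8-bit number: 256 possible values.
levels_per_channel = 2 ** 8             # values 0..255 -> 256 brightness levels
total_colors = levels_per_channel ** 3  # one independent level per R, G and B

print(levels_per_channel)  # 256
print(total_colors)        # 16777216
```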

Think about cars that have an auto-park function. When the driver “tells” the car to park somewhere, the car can check where there is enough space and park itself without making a scratch, all thanks to computer vision. Bosch is even building an automated valet parking system, which uses cameras to guide cars into an empty spot and park them without any human input.

Computer Vision and Agriculture

The automotive industry is not the only industry that makes use of computer vision. In agriculture for example, computer vision is used to help with:

  • The monitoring of the state of crops
  • The sorting and grading of crops
  • Pesticide spraying with drones
  • Researching a plant’s characteristics
  • Livestock farming

In order for AI to be able to do the things mentioned above, it typically relies on a Convolutional Neural Network (CNN), which is an example of deep learning.

The model helps the computer understand the context of the visual data: the CNN breaks the images down into higher-level representations, which are then given tags or labels. These tags are used to distinguish meaningful features in the image and classify them, much like the human brain does when making predictions. Popular computer vision tools are OpenCV, TensorFlow, CUDA, Keras, and YOLO.
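A minimal sketch of the core operation inside a CNN, the convolution: a small kernel slides over the image and sums element-wise products. The 4×4 image and the 2×2 "vertical edge" kernel here are hypothetical; real CNNs stack many learned filters to build those higher-level representations.

```python
# A tiny image with a sharp vertical edge between dark (0) and bright (255).
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
# A hypothetical 2x2 kernel that responds strongly to vertical edges.
kernel = [[-1, 1],
          [-1, 1]]

def convolve(img, k):
    # Slide the kernel over every valid position and sum the products.
    kh, kw = len(k), len(k[0])
    return [
        [sum(img[i + di][j + dj] * k[di][dj]
             for di in range(kh) for dj in range(kw))
         for j in range(len(img[0]) - kw + 1)]
        for i in range(len(img) - kh + 1)
    ]

feature_map = convolve(image, kernel)
print(feature_map)  # [[0, 510, 0], [0, 510, 0], [0, 510, 0]]
```

The strongest response (510) lands exactly on the edge, turning raw pixels into a feature ("there is a vertical edge here") the rest of the network can use.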

Natural language processing (NLP) and GPUs

Where computer vision allows devices to obtain human-like vision, Natural Language Processing (NLP) gives devices the ability to read text, hear speech, interpret it, and even measure sentiment like humans can. So now the AI model is the brain, and NLP its ears.

These days if you contact the customer service of a big company, the first thing that pops up is a chat box, with someone introducing themselves with: “Hello, my name is Bob. I’m a ‘Company name’ chatbot, how may I assist you?” After replying with: “My order is too late.”, the chatbot is then able to derive from your message that you need help from the order & shipping department and connects you.

Source: https://monkeylearn.com/natural-language-processing/

This type of interaction would not be possible without NLP.

NLP solutions have made it possible for chatbots to sound more human-like. Another use of NLP is the classification of emails. When you receive an email, it is classified as spam, promotions, or primary inbox. This is done by scanning the content of the email: if it contains things like coupons or discount offers, it is probably a commercial or promotional email; if it contains irrelevant messages and/or vague outbound links, it is most likely spam. Voice assistants like Siri, Alexa, and the Google Assistant also rely on NLP technology. These assistants can understand and respond to voice commands, and then perform tasks based on them. They can also learn from human interaction: if Siri, for example, doesn’t pronounce a name correctly and you say “That’s not how you say that”, Siri will ask how the name is pronounced and offer a couple of pronunciation options so you can pick the one that sounds best.
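The email sorting described above can be sketched as a toy keyword classifier. The keyword lists are purely illustrative; production systems learn these signals from data rather than hard-coding them.

```python
# Toy sketch of content-based email classification. The keyword lists
# are illustrative assumptions, not a real spam filter.
PROMO_WORDS = {"coupon", "discount", "sale", "offer"}
SPAM_WORDS = {"winner", "prize", "click here"}

def classify_email(text):
    content = text.lower()
    if any(w in content for w in SPAM_WORDS):
        return "spam"
    if any(w in content for w in PROMO_WORDS):
        return "promotions"
    return "primary"

print(classify_email("20% discount coupon inside!"))  # promotions
print(classify_email("You are our lucky winner"))     # spam
print(classify_email("Meeting moved to 3pm"))         # primary
```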

So how does NLP actually work?

Well, it does not start processing the text right away. First, four preprocessing steps are applied to the data before the NLP model can work with it:

  Tokenization: the speech or text is broken down into individual words or clauses, called tokens.

  Stemming: each word is reduced to its root form (e.g. searching becomes search).

  Lemmatization: the input is reduced to its base form, called the lemma (the form you can find in the dictionary).

  Stop word removal: words that add little or no information, such as prepositions, are filtered out.
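The steps above can be sketched in a few lines of plain Python. The tokenizer, stemmer, and stop word list here are deliberately naive placeholders; real pipelines use proper tools such as the Porter stemmer and curated stop word lists.

```python
# A deliberately naive sketch of the preprocessing steps above.
STOP_WORDS = {"the", "is", "in", "for", "a"}

def tokenize(text):
    # Tokenization: break the text into individual word tokens.
    return text.lower().split()

def stem(token):
    # Stemming: crudely strip a common suffix to reach a root form.
    return token[:-3] if token.endswith("ing") else token

def preprocess(text):
    tokens = tokenize(text)
    # Stop word removal: drop words that add little or no information.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [stem(t) for t in tokens]

print(preprocess("Searching for the order in the system"))
# ['search', 'order', 'system']
```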

How NLP actually works

Source: https://medium.com/predict/how-does-nlp-pre-processing-actually-work-8d097c179af1

After these steps, the NLP tool can turn the text or speech into something the machine can understand, so the machine can take action accordingly.

What applications are best suited for GPUs in Machine Learning? 

In general, applications with high parallelism and large computation requirements, where throughput is more important than latency, are very well suited for GPUs. Image processing is a prime example: GPUs can process the millions of pixels in an image in parallel, helping the model understand the image, “see” like humans can, and perceive objects or patterns in images or video frames. The same goes for NLP, where an AI model is given the ability to understand speech, interpret it, and take action accordingly. But the most important application that GPUs enable is deep learning, where a model is taught to learn by example. Deep learning, in turn, is what powers most modern computer vision and NLP applications: it gives models eyes through computer vision and ears through NLP.

With the rising demand for GPUs, they have become scarce. If you’re interested in one but are unable to get your hands on one, you can read this article a colleague of mine wrote about how to tackle the shortage.

