Top 6 current LLM applications and use cases

We discussed how to classify a Large Language Model (LLM), so let’s talk about the different ways LLMs can be used in the real world. The potential applications of LLMs are countless, and their limits have yet to be reached. However, this article should give you a general idea of some of the ways LLMs are being used today. 

In general, LLMs should be seen as assistants, not as human replacements. Human review and supervision remain essential to getting effective use out of LLMs. In this article, we will look at the following use cases:

  • Sentiment analysis
  • Chatbot
  • Content generation
  • Summarization
  • Text classification 
  • Cybersecurity

What is an LLM application?

An LLM application is a method to improve or innovate work using LLMs. LLMs are particularly adept at Natural Language Processing (NLP), the use of machine learning to understand and generate human language. While LLMs are an innovative technology, questions may remain about the scope of their usage in a professional context. With the six use cases below, we will attempt to answer that question. 

How is an LLM application built? Due to their large size, LLMs are typically deployed locally or in the cloud and called using an API endpoint. In very simple terms: a user provides a prompt via an interface, the prompt is sent to the model and processed, and a response is returned to the user. This can be done seamlessly using UbiOps, allowing LLMs to be easily incorporated into websites or other platforms. We have dedicated guides on how to deploy Mistral-7b, BERT, LLaMa 2 and Falcon.
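That request/response loop can be sketched in a few lines of Python. Note that the endpoint URL, the `Token` authorization scheme and the `prompt`/`output` field names below are illustrative assumptions; the actual request and response schema depends on how the deployment is configured.

```python
import json
import urllib.request

def build_payload(prompt: str, max_new_tokens: int = 256) -> dict:
    """Wrap the user's prompt in the JSON body the endpoint expects."""
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

def call_llm(endpoint: str, token: str, prompt: str) -> str:
    """POST a prompt to a deployed model and return the generated text."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {token}",  # auth scheme is an assumption
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["output"]
```

An interface (a website chat widget, an internal tool) only needs to collect the prompt, call a function like `call_llm`, and render the returned text.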

Now let’s get into the applications!

Sentiment analysis with LLMs

Sentiment analysis is the process of evaluating text and placing it into an emotional category. The most basic categories are positive/negative/neutral, but more fine-grained categorizations exist, ranging from disappointment to annoyance. An LLM which is very well suited to this use case is BERT.

What is BERT?

BERT is an encoder-only transformer model developed by Google in 2018. Importantly, it is relatively cheap and can be deployed easily using UbiOps. Here are some of its core characteristics (read our previous article to learn more about how to interpret these characteristics):

Name | Number of parameters | Type | License
BERT | 110M, 336M | Encoder-only | Apache-2.0

BERT characteristics

BERT can be used as a language classifier, which is why it is well suited for sentiment analysis. Now let’s explore some concrete examples, looking at use cases in financial services and customer service analysis.

Sentiment analysis use cases

Financial services

Sentiment analysis models can be used to analyze financial headlines, which can help in predicting financial trends. To this end, a fine-tuned version of BERT named FinBERT is an obvious candidate. FinBERT was fine-tuned on a large financial dataset, allowing it to effectively assess whether a financial news headline is positive or negative. Here are two examples:

Prompt | FinBERT output
Apple CEO states that the company is struggling. | 96.6% Negative, 2.4% Neutral, 1.0% Positive
Hurricane strikes Ford Headquarters! | 72.4% Negative, 25.1% Neutral, 2.5% Positive

FinBERT prompt examples

If you want to learn more about LLMs and finance, we have a dedicated guide on how to run a stock market index prediction model with UbiOps.
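To illustrate how such headline scores might feed into a trend signal, here is a minimal sketch that averages the positive-minus-negative probability over a batch of headlines. The score dictionaries mirror the FinBERT outputs above; the aggregation rule itself is a simplifying assumption, not part of FinBERT.

```python
def net_sentiment(headline_scores: list) -> float:
    """Average (positive - negative) probability across headlines.

    Each element is a label -> probability dict, e.g.
    {"positive": 0.010, "neutral": 0.024, "negative": 0.966}.
    Returns a value in [-1, 1]; below zero suggests negative news flow.
    """
    if not headline_scores:
        return 0.0
    diffs = [s["positive"] - s["negative"] for s in headline_scores]
    return sum(diffs) / len(diffs)

# The two example headlines above, as score dicts:
scores = [
    {"positive": 0.010, "neutral": 0.024, "negative": 0.966},
    {"positive": 0.025, "neutral": 0.251, "negative": 0.724},
]
signal = net_sentiment(scores)  # strongly negative news flow
```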

Customer service analysis

In customer service, analyzing the plethora of issues and concerns raised by customers can be arduous if done manually. However, with LLMs, you can easily analyze customer service requests and classify them based on emotion or urgency. A potential model for this use case is a model based on BERT named roberta-base-go_emotions which classifies text into 28 different emotions. Here are two examples:

Prompt | RoBERTa-base-go_emotions output
This product sucks and I want a refund now | Annoyance 56.2%, Anger 27.1%, Disapproval 7.8%, Disappointment 6.8%…
This product worked ok until the screen stopped working. | Neutral 42.7%, Approval 37.1%, Realization 11.6%, Disappointment 7.9%…

RoBERTa-base-go_emotions prompt examples
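Those per-emotion probabilities can feed a simple triage rule. The sketch below flags a ticket as urgent when the combined mass of a few negative emotions crosses a threshold; which of the 28 labels count as urgent, and the threshold itself, are assumptions to adapt to your own workflow.

```python
# Assumption: which of the model's 28 emotion labels signal urgency.
URGENT_EMOTIONS = {"anger", "annoyance", "disappointment"}

def is_urgent(emotion_scores: dict, threshold: float = 0.5) -> bool:
    """Flag a ticket when its combined negative-emotion probability
    meets or exceeds the threshold."""
    mass = sum(p for label, p in emotion_scores.items()
               if label in URGENT_EMOTIONS)
    return mass >= threshold

# The first example above: clearly urgent.
ticket = {"annoyance": 0.562, "anger": 0.271,
          "disapproval": 0.078, "disappointment": 0.068}
```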

Using an LLM as a chatbot

The most widely known usage of an LLM is as a chatbot. Chatbots are especially useful for companies wanting to use LLMs as helper tools for internal use (employees) or external use (customers). Chatbots can be fine-tuned and refined on internal data, allowing you to create a chatbot specific to your use case. In general, chatbot LLMs are flexible and smart, making them a great helper tool. Currently, the best, most cost-effective pre-trained LLM model series is Mistral.

What is Mistral?

Mistral is currently the best bang for your buck LLM, meaning that it is both cheap and effective. The model is extremely knowledgeable and has performed very well on a variety of tests. It is a conversational AI, which means that it is designed to simulate human conversations. It’s also completely open source and usable for commercial purposes. You can deploy it easily using UbiOps.

Model series | Number of parameters | Type | License
Mistral | 7B, 46.7B | Decoder-only | Apache-2.0

Mistral characteristics 

Two ways to use chatbot LLMs such as Mistral are as a customer service agent or as a question answerer. You can make your chatbot even more relevant by fine-tuning it with internal data or by implementing RAG. These methods can help create a chatbot which is specifically designed for your use case. Mistral, again, is a good model for fine-tuning given that it has performed well on a variety of general logic and intelligence benchmarks. We wrote a guide to help decide when to fine-tune your LLM where we explain this further. 
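RAG itself is simple in outline: retrieve the documents most relevant to a question and prepend them to the prompt. Below is a toy retriever that uses bag-of-words cosine similarity in place of the embedding model a production RAG setup would use; the documents and question are made up.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words counts (a stand-in for real embeddings)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: list, k: int = 1) -> list:
    """Return the k documents most similar to the question."""
    q = tokenize(question)
    return sorted(documents, key=lambda d: cosine(q, tokenize(d)),
                  reverse=True)[:k]

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is open Monday to Friday.",
]
question = "How many days do I have to request a refund?"
context = retrieve(question, docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The assembled `prompt` is then sent to the chatbot model, grounding its answer in your internal documents.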

Chatbot use cases

Customer service agent

Many companies are now using LLMs as customer service agents. Major banks such as ING are already incorporating chatbots into their customer service environment. These chatbots help human agents by reducing their workload: they can identify what a customer’s issue is and guide them to the correct contact point, all before human review. Here is an example conversation using Mistral:

Example conversation with a customer service agent using a Mistral model deployed on UbiOps

Question answering

Question answering chatbots, especially when fine-tuned on internal or industry specific data, can have a variety of applications. For instance, Apple is reported to use a custom chatbot for internal use. In general, a company or industry specific LLM can be a useful tool to help employees or customers find answers to questions. Here is an example using Mistral:

Example conversation with a question answering chatbot using a Mistral model deployed on UbiOps

Using LLMs for content generation

LLMs are also useful for content generation. Companies can use LLMs to generate copyright-free content, whether to avoid licensing fees or content creation costs. While this aspect of LLMs is still in its infancy, it is progressing rapidly: LLMs can be used to optimize image creation and to generate source code.

For image generation, diffusion based models are the way to go. While diffusion models aren’t made for NLP, when used in tandem with an LLM, you can optimize your image generating performance. 

What is a diffusion model?

Diffusion models, according to the paper Diffusion Models: A Comprehensive Survey of Methods and Applications, are a “family of probabilistic generative models that progressively destruct data by injecting noise, then learn to reverse this process for sample generation.” Diffusion models are perfectly designed for image creation, and many LLMs have been made to optimize their generative capacity. 

Content generation use cases 

Image generation

The most popular image generation model to date is Stable Diffusion. It can generate high-quality images very quickly, and as the images are copyright free, there are no license fees. Companies wanting to avoid licensing costs can use a model such as stable-diffusion-2-1. We have a guide on how to deploy Stable Diffusion on our platform. As mentioned, you can use LLMs to enhance your prompt so it generates a better quality image. We will be using MagicPrompt, which is based on GPT-2, to do so. Here is an example:

Prompt | LLM-enhanced prompt | Result
Quaint train station during autumn | Quaint train station during autumn, highly detailed, digital painting, concept art, sharp focus, illustration, detailed, warm lighting, cozy warm tint, magic the gathering artwork, volumetric lighting, 8k, no gold, no gold colours, art by Akihiko Yoshida, Greg Rutkowski | (generated image)

Example of a stable-diffusion generated image with prompt enhancement using MagicPrompt

If you are interested in learning how to couple AI models together, such as in the table above, you can do so easily using pipelines within UbiOps.
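Chaining works by feeding one model’s output into the next model’s input. The sketch below stands in for the first stage: where MagicPrompt would generate style modifiers with GPT-2, this toy version appends a fixed list of modifiers, and the result would then be passed to Stable Diffusion as the second stage.

```python
# Assumption: a fixed modifier list standing in for MagicPrompt's
# learned output; the real model generates these with GPT-2.
STYLE_MODIFIERS = [
    "highly detailed", "digital painting", "sharp focus",
    "warm lighting", "volumetric lighting", "8k",
]

def enhance_prompt(prompt: str) -> str:
    """Stage 1 of the pipeline: enrich a plain prompt with style
    modifiers before handing it to the image model."""
    return ", ".join([prompt] + STYLE_MODIFIERS)

enhanced = enhance_prompt("Quaint train station during autumn")
# Stage 2 would pass `enhanced` to a Stable Diffusion deployment.
```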

Code generation

Code generation LLMs are taking off as well. Tools such as GitHub Copilot and ChatGPT have been designed with code generation in mind, and they have caught on in the developer world. According to a GitHub blog post which surveyed 500 developers from US-based companies with 1000+ employees, around 92% of the developers surveyed were using AI coding tools. This number is extremely high and could be expected to increase as AI coding tools get smarter and smarter. 

The widespread use of AI tools raises some concerns, especially if companies rely on closed-source models. Data privacy concerns have been raised with ChatGPT, and many companies have banned the use of ChatGPT internally. If you want to deploy or use AI models, it is important to make sure that the companies you partner with are GDPR-compliant. 

Here is an example of code generation using Mistral:

Example of a code generation assistant using a Mistral model deployed on UbiOps.

Using LLMs for text classification

Retrieving the core elements of a body of text is useful in many fields. In essence, models designed for text classification try to place text into a certain category. In general, zero-shot classification is useful for this use case as it is very flexible. BART is a well-suited model for this.

What is BART?

BART is an encoder-decoder model developed by Facebook in 2019. For zero-shot classification, Facebook released bart-large-mnli which was trained on a Natural Language Inference (NLI) dataset. Models trained for NLI can easily be used as zero-shot classifiers.

Model | Number of parameters | Type | License
bart-large-mnli | 407M | Encoder-decoder | MIT

BART-large-mnli characteristics

What is zero-shot classification?

Zero-shot classification is a type of classification which enables a user to prompt a model with custom categories. This means that you can define categories at inference time. Zero-shot classifiers are useful if you envisage calling the model in different ways and using different category lists.
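Under the hood, an NLI-based zero-shot classifier scores each candidate label by asking the model whether the text entails a hypothesis like “This text is about {label}”, then normalizes the per-label entailment scores with a softmax. The sketch below shows only that final normalization step; the logit values are made up.

```python
import math

def normalize_labels(entailment_logits: dict) -> dict:
    """Softmax per-label entailment logits into a probability
    distribution over the candidate labels."""
    top = max(entailment_logits.values())  # subtract max for stability
    exps = {label: math.exp(v - top)
            for label, v in entailment_logits.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Made-up entailment logits for two candidate labels:
probs = normalize_labels({"sports": 2.0, "politics": 0.0})
```

Because the labels only enter through the hypothesis text, you can swap in a completely different category list on every call, which is what makes zero-shot classifiers so flexible.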


Text classification use cases

Search optimization 

Using an LLM can help optimize your searches. Here is a simple example: if you offer a search engine for articles or blog posts, you can use a zero-shot classifier model to retrieve the relevant tags from a search prompt and match them with articles carrying the same tags. Here is an example of how it could be used:

Search prompt | Candidate tags | Scores
Make a custom LLM | fine-tuning, language models, training, instruction tuning, business, cars | language models 94.5%, fine-tuning 79.3%, business 78.7%, instruction tuning 36.3%, training 12.6%, cars 0.2%
Iphone crack in screen | urgent, not urgent, phone, tablet, computer, fix, guide | phone 99.7%, fix 59.9%, urgent 59.4%, guide 17.2%, not urgent 2%, computer 0.1%, tablet 0%

BART-large-mnli zero-shot prompting for search optimization
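Once the classifier has turned a search prompt into scored tags, matching against tagged articles is straightforward. A minimal sketch, assuming each article carries a set of tags, using the first query from the table above (the article titles are illustrative):

```python
def rank_articles(query_tags: dict, articles: dict, min_score: float = 0.5) -> list:
    """Rank articles by the summed scores of the relevant query tags
    they share. Tags scoring below min_score are ignored."""
    relevant = {t for t, s in query_tags.items() if s >= min_score}
    scored = {title: sum(query_tags[t] for t in tags & relevant)
              for title, tags in articles.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Zero-shot scores for the prompt "Make a custom LLM" (from the table):
query_tags = {"language models": 0.945, "fine-tuning": 0.793,
              "business": 0.787, "training": 0.126, "cars": 0.002}
articles = {
    "When to fine-tune your LLM": {"language models", "fine-tuning"},
    "Top family cars of 2024": {"cars"},
}
ranking = rank_articles(query_tags, articles)
```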

Question sorting

If your platform has some sort of feedback form, sorting through the responses can be done effectively using a zero-shot classifier. For instance, if you have different teams dealing with different issues, a zero-shot classifier could allow you to determine which team is best for answering certain questions. Here are a couple of examples, again using bart-large-mnli:

Prompt | Candidate labels | Scores
Scroll bar is too fast on homepage | frontend development, backend development | 93.0% frontend development, 7.0% backend development
http request returns 500 error | frontend development, backend development | 56.2% backend development, 43.8% frontend development

BART-large-mnli zero-shot prompting for question sorting

While the results here may seem underwhelming, remember that you can fine-tune your LLM or use RAG to make it specific to your company/industry. BART-large-mnli is a general use zero-shot classifier.

Using LLMs for summarization

Summarizing text is another task that LLMs perform well. Summarization involves compressing a lengthy piece of text down to its essentials. BART, being an encoder-decoder model, is built for both text comprehension and generation, which makes it well suited for summarization.
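One practical wrinkle: models like bart-large-cnn accept only about 1,024 tokens, so longer texts must be split into chunks that are summarized separately, after which the partial summaries are concatenated or re-summarized. Here is a minimal word-window chunker, using a rough words-to-tokens rule of thumb rather than an exact tokenizer:

```python
def chunk_words(text: str, max_words: int = 700, overlap: int = 50) -> list:
    """Split text into overlapping windows of at most max_words words.

    ~700 words leaves headroom under a 1,024-token limit (a rough
    rule of thumb; a real pipeline would count tokens exactly).
    Consecutive chunks overlap so no sentence is cut off blind.
    """
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]
```

Each chunk is sent to the summarizer in turn, and the chunk summaries are joined into the final digest.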

Summarization use cases

Media digestion

Reading the news every single day, especially if you want to be an informed reader across a variety of outlets, can be a time-consuming task. News summarization services aim to solve this issue. A good model for this task is bart-large-cnn, a model based on BART and fine-tuned on the CNN/DailyMail dataset.

Here is an example of its summarization capabilities. I provided bart-large-cnn with a BBC article titled: Apple watch: US upholds ban on sale over patent dispute. It summarized the article as follows: 

Apple was earlier allowed to sell its Series 9 and Ultra 2 watches while proceedings were ongoing. But a US appeals court has since reversed that decision. It is the latest turn in a dispute between the firm and medical technology company Masimo. Masimo and spin-off Cercacor have accused the iPhone maker of poaching key staff.

Academic article summarization

In research or academia, you can use AI to generate a summary of a lengthy article and quickly check if it contains any relevant information. It can help ascertain whether you should spend the time to read an entire article. An example of a model specifically developed for long form articles or books would be led-large-book-summary, a model based on BART, or bigbird-pegasus-large-pubmed, a model developed by Google based on a unique architecture. Here is an example of how it works: we can give led-large-book-summary the first section of a chapter titled “THE EU AND ALGERIA: A temporary marriage of convenience?”, then it responds with a summary:

This chapter discusses the contour of the EU-Algeria relations and its relationship with Algeria. The EU is Algeria’s main trading partner and Algerian exports to the EU have grown significantly over the years 2002-2014. However, trade will begin to decline in both 2019 and 2020 due to domestic demand and a subsequent decline in exports due to a gas crisis. Europe has been interested in Algeria’s natural gas exports since the outbreak of the War in Ukraine in March 2014. Former Italian Prime Minister Mario Draghi, French Prime Minister Giorgias Meloni, and French President Emmanuel Macron all visited Algeria to urge it to increase gas exports. The author argues that the EU/Algeria relationship will be short lived due to the limited Algerian hydrocarbon export capacity and because of divergence of views on issues important to the Algerian and European governments.


Using LLMs for cybersecurity

LLMs have some unfortunate consequences when it comes to cybersecurity, given that massive chat models such as GPT-4 and Mistral are trained on large sections of the internet. Through their vast training sets, they have learned hacking as well as phishing techniques. However, they also possess capabilities when it comes to cyber defense.

LLMs for cyber offense

Following the release of GPT-4, the Institute of Electrical and Electronics Engineers (IEEE) released a research paper documenting the ways GPT-3 could be used for cyberwarfare. According to this paper, ChatGPT (when prompted using jailbreaking techniques) can give detailed information on how to perform attacks using:

  • SQL injection
  • Phishing/social engineering
  • Ransomware
  • Spyware
  • Trojans
  • Polymorphic malware

Given how eloquent LLMs can be when writing an email, detecting phishing attacks will become increasingly difficult as they increasingly resemble a legitimate source. Furthermore, LLMs can generate attacks far faster than humans, potentially increasing the rate of attacks. Let’s take a look now at the ways LLMs can be used for cyber defense.

LLMs for cyber defense

According to the same paper, GPT-3/4 can be used to:

  • Read server logs and potentially detect vulnerabilities
  • Keep you up to date on the cybersecurity industry, detailing new threats and malware that others have identified
  • Detect vulnerabilities in code or generate secure code
  • Detect unusual activity or patterns 
  • Identify malicious code

Therefore, while the potential LLMs have for cyber offense is scary, their potential for cyber defense is also powerful. 


Conclusion

The use cases for LLMs are vast and continue to expand with the release of each new model. In this article we provided several examples of how LLMs can be used: sentiment analysis, chatbots and question answering, content generation, summarization, text classification and cybersecurity. This is by no means the limit of their usage; these categories simply represent the main ways LLMs are being used today. As the quality of LLMs improves, new and innovative uses will be discovered. So don’t miss out! Deploy an LLM application today with UbiOps and find out how easy it can be to integrate LLMs into your products and services with the right platform to support you!
