GenAI: Why did 1 million users start using OpenAI’s ChatGPT in its first 5 days?

What is the new chatbot ChatGPT and what sets it apart from other chatbots?

In November 2022, OpenAI released ChatGPT, an artificially intelligent chatbot that can answer just about any question and that anyone can interact with for free. ChatGPT is built on OpenAI’s powerful GPT-3 family of transformer models, a form of generative artificial intelligence (generative AI). This subclass of AI is all about creating new and original content. While not the first of its kind, the release of ChatGPT marks an important milestone in the field of generative AI. According to OpenAI’s CEO, Sam Altman, ChatGPT had already helped over one million different users obtain unique answers to their prompts and questions, not even one week after its release! So what makes ChatGPT unique compared to other chatbots? And what is the cause of its instant popularity?

Generative AI is a powerful tool for creators

Generative AI is a type of artificial intelligence that can be used to create a wide variety of content, including text, images, music, and more. Its ability to generate large amounts of content quickly makes it an interesting type of AI to research, and it can save businesses and organisations the time and resources needed to create content from scratch themselves. The technology can also help produce creative output that would be difficult or impossible for humans to generate on their own. Additionally, generative AI algorithms can produce personalised content that is tailored to an individual’s preferences or characteristics, and they can be used to augment or supplement a dataset, which is useful for tasks such as training machine learning models. Generative AI therefore has the potential to revolutionise a wide range of industries and applications, from content creation to personalised customer experiences.

What can we use (and are we using) generative AIs for?

A lot of the potential that generative AI has is already being realised! Over the summer of 2022, DALL-E Mini (now available as Craiyon) went viral as a great way to quickly generate funny images. Try it out! That’s a fun application, but there is no shortage of more serious applications either. For example, IBM uses its AI RoboRXN to aid in material synthesis research. This AI can propose a process to synthesise a target molecule, even if that molecule has never been synthesised before! Another example: NVIDIA ships AI upscaling in its SHIELD TV, which can make your HD movies look great on your 4K screen. That, too, is generative AI!

ChatGPT is an implementation of the GPT-3 family of models, which use the Transformer architecture. Transformer models are generative AI models often used in the field of Natural Language Processing (NLP).

This interdisciplinary field combines knowledge of computer science, linguistics, and AI to program computers to process, create, and analyse natural language. The field of NLP is rapidly evolving, with many active research areas. The use of neural networks, particularly transformer-based models such as BERT, GPT-3, and RoBERTa, has greatly improved performance on NLP tasks such as language understanding and text generation. To understand the hype around ChatGPT, it helps to know what GPT-3 is. In their 2020 paper Language Models are Few-Shot Learners, researchers at OpenAI introduced a language model with 175 billion parameters that performs very well on various language-related tasks like translation, question answering, and reasoning. It is this family of language models that ChatGPT is based on.
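If you want a feel for what a transformer language model does in practice, the sketch below uses the open-source Hugging Face transformers library with GPT-2, a small, freely downloadable relative of GPT-3 (GPT-3 itself is only accessible through OpenAI’s hosted API). The prompt and generation settings are just illustrative choices.

# A minimal text-generation sketch using the Hugging Face transformers library.
# GPT-2 stands in here for GPT-3, which is only available through OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])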

Now, chatbots are not new.

Cleverbot has been online for almost 25 years! But you’d be hard-pressed to find one as good as ChatGPT, a chatbot that runs on GPT-3.5 and uses Reinforcement Learning from Human Feedback (RLHF) to continuously become better at its job. It is extremely human-like in how it converses. Unlike its predecessor, InstructGPT, ChatGPT can admit its mistakes, ask follow-up questions, and answer just about any question you put to it. It can even help you code! You can try it out here, but don’t be surprised if you can’t get in: the site is often at capacity.
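At the time of writing, the ChatGPT interface itself has no public API, but the underlying GPT-3.5 generation of models can be queried programmatically through OpenAI’s Python library. The snippet below is a rough sketch of such a call (openai v0.x style, with text-davinci-003 as a stand-in model and an API key assumed to be set in your environment), not ChatGPT itself.

# Rough sketch of querying a GPT-3.5-era model through OpenAI's Python library
# (openai v0.x). text-davinci-003 stands in here because ChatGPT itself has no
# public API at the time of writing. Assumes OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain in two sentences what reinforcement learning from human feedback is.",
    max_tokens=100,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())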

ChatGPT’s creators do provide us with a word of caution. It can sometimes give you nonsensical answers that (dangerously) seem plausible. I experienced this myself: when I asked it to provide sources along with the answer to my question, it confidently gave me the titles of three non-existent papers. There is a whole list of problems associated with ChatGPT’s answers: they can be wordy, sensitive to the phrasing of your question, biased, and sometimes simply incorrect. Finally, I let ChatGPT explain another one of its limitations:

It is not exactly up to date on recent events. In some cases, this can lead to it providing you with outdated information.

Why is ChatGPT such a milestone for generative AI?

So, given these limitations, and given that ChatGPT is not a first-of-its-kind chatbot, why did it become instantly popular? To put it simply: it is still extremely good, and much better than anything we had before.

Why is that? What can ChatGPT do that its predecessors couldn’t? One key improvement is its level of context retention. It is theorised that ChatGPT can hold up to 6144 words of context in its memory, over four times more than its competitors. Where previous chatbots would quickly “forget” information you gave them just a minute ago, ChatGPT remembers the context of the conversation you’re having with it for much longer. This allows you to ask it follow-up questions, ask it to improve its answers, and iterate on them by suggesting changes to its output as the conversation goes on. This AI expert used ChatGPT to create an entire AI application on AWS, just by conversing with it and asking it to write code snippets. This is a level of human-machine cooperation that is entirely new, and only available now that we have access to ChatGPT.
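To make the idea of context retention concrete, here is a simplified sketch of how a chat interface can keep a stateless language model “aware” of the conversation: the history is replayed in the prompt on every turn and trimmed to a fixed word budget. This is only an illustration of the general mechanism, not OpenAI’s actual implementation, and the generate() call is a placeholder for whatever model you use.

# Simplified illustration of context retention with a stateless language model:
# the conversation history is replayed in the prompt on every turn and trimmed
# to a fixed budget. Not OpenAI's actual implementation, just the general idea.
CONTEXT_LIMIT_WORDS = 6144  # rough word budget, following the figure mentioned above

def build_prompt(history, user_message):
    """Concatenate past turns plus the new message into a single prompt."""
    turns = history + [f"User: {user_message}", "Assistant:"]
    prompt = "\n".join(turns)
    # Drop the oldest turns until the prompt fits within the context budget.
    while len(prompt.split()) > CONTEXT_LIMIT_WORDS and len(turns) > 2:
        turns = turns[1:]
        prompt = "\n".join(turns)
    return prompt

# Example usage, with generate() standing in for a call to the model:
# history = ["User: Write a haiku about autumn.", "Assistant: Leaves drift on cold wind..."]
# prompt = build_prompt(history, "Make it about winter instead.")
# reply = generate(prompt)  # the model sees the whole conversation so far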

In addition to having a better memory, ChatGPT’s language model also has over ten times as many parameters as its closest peer.

This means that the patterns it can capture, and therefore the answers it can produce, can be much more intricate and sophisticated than what was possible before. It has been reported that students have started using ChatGPT to complete their writing assignments. A Princeton college student even went as far as developing an app that can detect whether a piece of text was written by the chatbot or not. But there are also more responsible uses for ChatGPT in education: teachers can use it to create assignments, give students feedback, and find ways to give personalised attention to specific students. In fact, there are many fields in which ChatGPT can be of great help. It can write sales emails, caption photos, generate marketing strategy ideas, and write interesting job descriptions. This is just a sample of what you can do with ChatGPT; the possibilities seem endless.

Will ChatGPT take my job?

ChatGPT can write like a human, write code, and adapt its answers to your specific needs. So will it take your job? The short answer is: probably not. The long answer is a bit more complex. While the possibilities do seem endless, the chatbot’s output cannot be used blindly. It can help, but it cannot take over. While it can write an essay, it cannot fact-check itself. While it can write code snippets, it cannot engineer structured and maintainable codebases. These things still require humans. ChatGPT is a tool to cooperate with. Maybe that will lead to fewer people being needed to perform certain tasks?

Conclusion

To answer the question: over one million people started using ChatGPT almost immediately simply because it is a powerful tool that can generate ideas and kickstart your writing process. It is the first chatbot to truly pass the threshold of being a useful piece of technology instead of a gimmicky toy to play with. It will help people in many professions with all kinds of different projects. However, it will not remove the need for human intervention. Not yet, at least.

Running Transformer models on UbiOps

Are you looking to use generative AI yourself? It’s easy to deploy available Transformer models on UbiOps for inference purposes. These models run best on powerful GPUs like the NVIDIA A100, which are expensive.

The UbiOps platform can help you save costs with its on-demand GPU functionality, so you only pay when the model is active. This can decrease cloud costs tremendously, especially for models that require expensive hardware.
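As a rough illustration of what such a deployment could look like, the sketch below wraps a Hugging Face text-generation model in the Deployment class structure that UbiOps deployment packages use. The choice of model and the "prompt"/"completion" field names are assumptions made for this example; match them to the input and output fields you define for your own deployment.

# deployment.py - rough sketch of a UbiOps deployment serving a Hugging Face
# text-generation model. The Deployment class with __init__ and request methods
# follows the UbiOps deployment package convention; the model choice and the
# "prompt"/"completion" field names are assumptions for this example.
from transformers import pipeline

class Deployment:
    def __init__(self, base_directory, context):
        # Load the model once when the deployment instance starts, so each
        # request only pays for inference rather than model loading.
        self.generator = pipeline("text-generation", model="gpt2")

    def request(self, data):
        # 'data' contains the input fields defined for the deployment in UbiOps.
        outputs = self.generator(data["prompt"], max_new_tokens=100)
        return {"completion": outputs[0]["generated_text"]}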

 
