LLMs in Production Conference 2023 – The Power of Data

Our takeaways from the LLMs in Production conference

Last Thursday, we attended the LLMs in Production conference, hosted virtually by the MLOps Community. Large Language Models (LLMs) and Foundation Models (FMs) have become the talk of the town in the world of artificial intelligence. However, these models are only as good as the data they are trained on, making data the key to their success. Throughout the conference, speakers discussed the importance of well-structured data, the barriers to training and deploying LLMs, and potential solutions to these challenges.

The Importance of Data for Large Language Models (LLMs) and Foundation Models (FMs)

LLMs and FMs have one thing in common: they both require vast amounts of high-quality data. The more data a model is trained on, the more accurate its predictions generally become, and labeled data is especially valuable because it gives the model explicit examples of desired outcomes to learn from.

New industries are beginning to leverage LLMs and FMs for their own use cases: the insurance industry is applying them to underwriting, and the legal industry to case analysis. Cameron Feenstra from Anzen explained how Anzen uses LLMs to streamline insurance underwriting. As adoption grows, we can expect to see more specialized models developed to cater to the specific needs of these industries.

Two main themes emerged from the conference: the barriers to training LLMs, and the barriers to deploying them.

Barriers to Training LLMs

Training LLMs is challenging because data development and labeling remain largely manual tasks: difficult, expensive, and hard to scale, particularly for private, fast-changing data that requires subject-matter expertise. Data scientists spend a significant share of their time on this work, making it costly and slow. This point was driven home by multiple speakers, including Alex Ratner from Snorkel.ai. The move towards bespoke LLMs for different domains only compounds the problem, since companies struggle to adapt their data to each new model.
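Ratner's answer to this bottleneck is programmatic labeling. As a hedged illustration of the core idea, not Snorkel's actual API, the toy Python sketch below has domain experts write small heuristic labeling functions whose noisy votes are combined into training labels, so expertise is encoded once and applied at scale; the heuristics and example texts are invented placeholders.

```python
from collections import Counter

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

# Each labeling function encodes one cheap, imperfect heuristic.
def lf_mentions_refund(text: str) -> int:
    return NEGATIVE if "refund" in text.lower() else ABSTAIN

def lf_says_thanks(text: str) -> int:
    return POSITIVE if "thank" in text.lower() else ABSTAIN

def lf_praise_with_exclamation(text: str) -> int:
    return POSITIVE if "great" in text.lower() and "!" in text else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_refund, lf_says_thanks, lf_praise_with_exclamation]

def weak_label(text: str) -> int:
    """Majority vote over the labeling functions that did not abstain."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no heuristic fired; the example stays unlabeled
    return Counter(votes).most_common(1)[0][0]

docs = ["Great service, thank you!", "I want a refund immediately."]
print([weak_label(d) for d in docs])  # -> [1, 0]
```

Snorkel itself replaces the majority vote with a label model that weights each function by its estimated accuracy, but the workflow is the same: a subject-matter expert writes a handful of functions instead of hand-labeling thousands of examples.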

Barriers to Deploying LLMs

Deploying LLMs is just as challenging as training them. Whether to use an MLOps platform or build your own infrastructure depends on scale, and on how crucial speed and latency are to your business model. High-frequency traders, for example, typically have latency requirements so stringent that only a bespoke serving architecture can meet them.

But that doesn’t mean every FinTech startup should start hiring DevOps engineers. Software engineers have a toolbox of techniques for incrementally speeding up models, such as quantization, request batching, and response caching, and these don’t need to be applied by hand: an automated platform can implement them efficiently. For most machine learning practitioners, an MLOps platform will get the job done quickly, reducing costs and time-to-market.
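As one concrete example of such a technique, here is a minimal sketch of post-training dynamic quantization in PyTorch, which stores linear-layer weights as int8 and often reduces CPU inference latency with a single call. The tiny stand-in network is a placeholder; in practice you would quantize your actual serving model.

```python
import torch
import torch.nn as nn

# A small stand-in network; a real serving model would go here.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
model.eval()

# Dynamic quantization converts Linear weights to int8 ahead of time and
# quantizes activations on the fly, trading a little accuracy for speed.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 768)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 768])
```

The appeal for serving is that no retraining is needed, which is exactly the kind of optimization a platform can apply automatically on your behalf.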

The Future of LLMs

The future will bring MLOps tooling that enables personalized FMs trained on your own data and workflows. Data is a more durable moat, and the last mile is where value is generated. According to Daniel Jeffries from the AI Infrastructure Alliance, LLMs will evolve into Large Thinking Models (LTMs), which will become increasingly vulnerable to misuse and bugs as they integrate further into people’s working lives. One proposed safeguard is to have LLMs generate thousands of possible responses to dangerous prompts and train on those. Supporting processing at that scale, however, will require a training revolution, including model patching, adapters like LoRA, and continual learning.
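To ground the adapter idea, here is a minimal sketch of attaching a LoRA adapter with Hugging Face’s peft library; the base checkpoint and hyperparameters are placeholder choices for illustration, not a recipe from the talk.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder base model; swap in whichever checkpoint you actually use.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# LoRA freezes the base weights and trains small low-rank update matrices,
# so a domain-specific "patch" is megabytes instead of a full model copy.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update
    lora_alpha=16,   # scaling factor for the update
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter weights train, patches can be swapped in and out of a shared base model, which is what makes the continual, per-domain updating Jeffries described start to look practical.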

Final Thoughts

LLMs and FMs have become critical in the world of AI, and their importance will only continue to grow. To realize the full potential of these models, companies must overcome the barriers to training and deploying them, and a data-centric approach is crucial to doing so. Manual data inspection, interpretation, cleaning, and formatting are extremely time-consuming and tedious, yet highly valuable, since data is the foundation of LLMs and FMs. Optimizing models is also essential to reducing the need for costly GPUs. Meanwhile, as models become increasingly commoditized, AI is attracting worldwide attention.

All in all, LLMs in Production was a fantastic conference with excellent speakers. It was a pleasure to learn from Diego Oppenheimer, Hanlin Tang, Shreya Rajpal, Tanmay Chopra, Alex Ratner, Lina Weichbrodt, and many others. With more and more industries venturing into AI, drawn by the promise of LLMs, UbiOps stands ready to help organizations build and manage scalable AI solutions that drive better business outcomes and a better world. When you’re ready to deploy your next LLM or FM, feel free to reach out to us!


If you have any questions about what else is possible with UbiOps, or you’re interested in deploying your model on UbiOps, do not hesitate to reach out to us and book a call with our Product Specialist!
