Today, we are excited to announce a strategic partnership with OpenAI, providing GPT-3.5 fine-tuning for the world’s leading enterprises. At Scale, we believe that fine-tuning is the key to unlocking the performance of LLMs for any organization's most critical use cases, and we have seen both great excitement and demand for this fine-tuning capability from the market. Our partnership brings together OpenAI's advanced base model GPT-3.5 with Scale's fine-tuning expertise and industry-leading Data Engine to help every company create custom state-of-the-art models for their specific business needs. Scale already performs fine-tuning for many commercial and open-source models. As OpenAI’s preferred fine-tuning partner for GPT-3.5, we are excited to leverage their powerful APIs to help even more enterprises build the most powerful custom LLMs that increase efficiency while reducing costs.
Fine-tuning GPT-3.5 has already yielded performance improvements for companies like Brex. As a preferred partner to OpenAI, we were granted early access to GPT-3.5’s fine-tuning APIs. By leveraging these APIs in combination with Scale’s enterprise Custom LLM platform and high-quality data from Scale’s Data Engine, we delivered initial performance improvements for Brex.
"To get the most value out of our models, companies are looking to use their data to create tailored services and solutions,” said Brad Lightcap, COO, OpenAI. “Scale extends our ability to bring the power of fine-tuning to more companies, building on their enterprise AI experience to help businesses better apply OpenAI models for their unique needs.”
Brex has been leveraging LLMs to generate high-quality expense memos that help ease the burden of compliance requirements for employees. Automating the generation of expense memos helps save significant amounts of employee time within the expense workflow and enables companies to close their books faster. The Brex team had previously been using GPT-4 for memo generation, but wanted to explore if they could improve cost and latency, while maintaining quality, by using a fine-tuned GPT-3.5 model.
By using the GPT-3.5 fine-tuning API on Brex data annotated with Scale’s Data Engine, we saw that the fine-tuned GPT-3.5 model outperformed the stock GPT-3.5 Turbo model 66% of the time.
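A head-to-head win-rate figure like this comes from pairwise comparisons, where annotators pick the preferred output for each prompt. A minimal sketch of the calculation (the preference labels below are illustrative placeholders, not Brex evaluation data):

```python
# Sketch of a pairwise win-rate calculation over human preference labels.
# Each label records which model's output an annotator preferred for one prompt.
# These labels are toy placeholders, not real evaluation data.

def win_rate(labels, model):
    """Fraction of comparisons in which `model` was preferred."""
    wins = sum(1 for preferred in labels if preferred == model)
    return wins / len(labels)

# Toy data: 2 of 3 annotators preferred the fine-tuned model's output.
preferences = ["fine_tuned", "fine_tuned", "stock"]
print(f"{win_rate(preferences, 'fine_tuned'):.0%}")  # prints "67%"
```

In practice the same comparison runs over a large held-out prompt set, with multiple annotators per prompt to reduce noise.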
"At Brex, we’ve been investing heavily in AI, which is transforming how we offer financial services to customers. Our ongoing partnership with OpenAI and Scale AI positions us at the cutting edge by employing state-of-the-art techniques to enhance employee compliance and help finance teams close the books faster. In particular, fine-tuning GPT-3.5 has been a game changer for us, enabling us to deliver high-quality AI experiences, comparable to GPT-4, with much lower cost and lower latency. This unlocks a whole new set of capabilities for us that were previously not viable." - Henrique Dubugras, CEO, Brex
How to Get Started
Our pilot project with Brex demonstrates how fine-tuning LLMs can deliver high-quality results at lower cost and latency.
“We are excited to partner with OpenAI to supercharge model performance, helping every enterprise utilize AI most effectively for their unique needs. Prompting alone, even atop the best LLMs like GPT-3.5, is not enough model customization to produce the most accurate, efficient results. As with software, an incredible amount of value comes from fine-grained optimizations, and fine-tuning is critical for that.” - Alexandr Wang, Founder and CEO, Scale AI
Scale starts by preparing and enhancing your data with the Scale Data Engine. Then, we fine-tune GPT-3.5 with your data and further customize models with plugins and retrieval-augmented generation (RAG), which enables the model to reference and cite your proprietary documents in its responses. We then leverage our Test and Evaluation platform and trained domain experts to ensure that the model exceeds your performance expectations and safety requirements.
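As one concrete illustration of the fine-tuning step, OpenAI's GPT-3.5 fine-tuning API accepts chat-formatted training examples as a JSONL file, one JSON object per line. A minimal sketch, assuming a hypothetical expense-memo example and file name (the actual data preparation and evaluation are handled by Scale's platform):

```python
import json

# Hypothetical training examples in the chat format expected by the
# GPT-3.5 fine-tuning API: each record holds a list of
# system/user/assistant messages demonstrating the desired behavior.
examples = [
    {"messages": [
        {"role": "system", "content": "Write a concise expense memo."},
        {"role": "user", "content": "Team dinner, $180, client project kickoff."},
        {"role": "assistant", "content": "Dinner with the client team to kick off the project; $180."},
    ]},
]

# Serialize to JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and starting a fine-tuning job would then look like:
#   from openai import OpenAI
#   client = OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")
```

The resulting fine-tuned model is then served through the same chat completions endpoint as the base model, under its own model identifier.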
Learn more about how Scale can help you customize LLMs for your unique use case.