Technical

Axis Improves Generation Quality and Lowers Costs With Fine Tuning

Kyle Corbitt

Jan 4, 2024

3 minutes

Axis is a modern platform to help multinational enterprises understand their local markets across the world. Axis provides the most reliable way to stay informed on the ever-changing regulations, norms and current events that might affect a business with a global footprint.

A few months ago they began developing a new product offering with a simple but ambitious goal: to build an index of every national, provincial and local regulation world-wide, and then make that index available to their customers as an easily searchable and understandable database.

Like many companies building with GenAI at scale, they faced a tough choice when the time came to roll out the new service widely. GPT-4 was too expensive (even with the recently-released Turbo), and GPT-3.5 couldn’t perform the task to the required standard. “We needed to process 1m+ events per month and 10k+ regulations per country—the only way to do this accurately in a cost-effective way was to fine-tune our models,” writes Mishaal Al Gergawi, Axis cofounder and CEO.

Axis approached us for a pilot and immediately began gathering data by running their existing prompts through our SDK. They quickly began creating fine-tuned models as well, starting with datasets as small as a few hundred examples. OpenPipe handled the dataset management, training and deployments with an OpenAI-compatible API, giving Axis the freedom to iterate quickly and ultimately deploy stronger models. “For us, the biggest benefit was lowering time to production,” explains Alex Vidal Rodriguez, who led the implementation on Axis’s side. “OpenPipe lets us focus on our IP and use the platform to train, review and deploy models in a few clicks with confidence.”
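For readers unfamiliar with the pattern, an OpenAI-compatible API means an existing OpenAI client can be pointed at a fine-tuned model simply by swapping the base URL and model name. The sketch below illustrates the general idea only; the endpoint, model slug, and prompt are placeholders, not Axis’s or OpenPipe’s actual identifiers.

```python
# Minimal sketch of calling an OpenAI-compatible endpoint with the standard
# OpenAI client. The base URL, model slug, and prompt below are illustrative
# placeholders, not real identifiers.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-inference-host.com/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="your-org/your-fine-tuned-model",  # placeholder model slug
    messages=[
        {"role": "system", "content": "Classify the regulatory event described by the user."},
        {"role": "user", "content": "New data-residency rules announced for financial institutions in Country X."},
    ],
)
print(response.choices[0].message.content)
```

Because the request and response shapes match OpenAI’s, swapping between a base model and a fine-tuned replacement is a one-line change rather than a rewrite.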

Maintaining Quality

Axis’s task is complex, and quality is non-negotiable. “Our customers rely on Axis to continuously answer the questions: which laws do I need to comply with, and which officials do I need approvals from,” Mishaal explains.

Custom evaluations were key to quickly iterating on the model while optimizing for generation quality. Axis was an invaluable design partner in developing our automatic eval functionality, which uses GPT-4 to compare model completions head-to-head and gives a quick sense of which model performs better.
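As a rough illustration of the head-to-head approach (not OpenPipe’s internal implementation), a judge model can be asked to pick the better of two completions for the same input, and its verdicts aggregated into a win rate over a sample:

```python
# Illustrative head-to-head eval sketch, not OpenPipe's internal implementation.
# A judge model compares two candidate completions for the same input and picks
# a winner; win rates over a sample approximate relative quality.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are comparing two answers to the same instruction.
Instruction:
{instruction}

Answer A:
{answer_a}

Answer B:
{answer_b}

Reply with exactly "A", "B", or "TIE" depending on which answer is better."""


def judge(instruction: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge model which completion is better for this input."""
    result = client.chat.completions.create(
        model="gpt-4-turbo",  # judge model; swap for whichever evaluator you use
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            instruction=instruction, answer_a=answer_a, answer_b=answer_b)}],
    )
    return result.choices[0].message.content.strip()


def win_rate(samples: list[dict]) -> float:
    """Fraction of comparisons won by model A (ties count as half a win).

    In practice you would also randomize which model appears as A vs. B on
    each sample to control for the judge's position bias.
    """
    scores = []
    for s in samples:
        verdict = judge(s["instruction"], s["model_a_output"], s["model_b_output"])
        scores.append({"A": 1.0, "B": 0.0}.get(verdict, 0.5))
    return sum(scores) / len(scores)
```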

“OpenPipe’s automatic evals let us choose a sample size quickly and get an idea of whether something is working. It also let us test new models quickly. OpenPipe released a new optimized Mistral model, and within 2 hours we had it trained on our data and could measure the improvement on our evals. It also saved us human effort,” Alex explains.

Saving Money

By using a custom Mistral 7B variant trained and deployed through OpenPipe, Axis was able to save over 95% on a per-token basis compared to GPT-4-Turbo, the next cheapest model that met their high quality bar.

In fact, the savings went even further—Axis was also one of the first customers to adopt our unique “Pruning Rules” dataset optimization feature. Pruning rules allowed Axis to remove repetitive parts of their prompt, like system instructions, at training and inference time. Since the models learn these instructions by example during training anyway, Axis was able to remove them with minimal quality degradation in the final product. As a result, they cut their input token usage by over 50% on most completions, leading to lower latency and even greater savings. “Input pruning was something I had read about but wouldn’t know how to do on my own,” writes Alex.
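Conceptually, a pruning rule is a fixed block of prompt text that gets stripped from every example before training and from every request at inference. The sketch below shows that idea in isolation, using a made-up instruction block; it is not OpenPipe’s actual implementation.

```python
# Conceptual sketch of prompt pruning, not OpenPipe's actual implementation.
# A pruning rule removes a fixed, repetitive block of prompt text (here, a
# boilerplate system instruction) from every example before training and from
# every request at inference time; the model learns the behaviour those
# instructions describe from the training examples themselves.

# Hypothetical boilerplate that appears verbatim in every prompt.
PRUNED_TEXT = (
    "You are a regulatory analyst. Read the event below and return a JSON "
    "object describing which regulations it may affect.\n\n"
)


def apply_pruning_rules(messages: list[dict], rules: tuple[str, ...] = (PRUNED_TEXT,)) -> list[dict]:
    """Strip each pruning rule's text from every message's content."""
    pruned = []
    for message in messages:
        content = message["content"]
        for rule in rules:
            content = content.replace(rule, "")
        pruned.append({**message, "content": content})
    return pruned


# Applying the same rule to the training set and to live requests keeps the
# prompts the model sees at inference consistent with the prompts it was
# trained on, while the redundant tokens are never sent or billed.
```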

Building A Defensible Product

At this point, Axis has deployed multiple fine-tuned models in production and is using them to power all customer traffic. Their models process events and regulations effectively in many languages, and in some cases actually outperform GPT-4 despite running at a fraction of the cost. As their user base grows, these models and the feedback they receive on their generations will help build a data flywheel that cements their position as the market leader. “OpenPipe proved to be a great partner for us,” Mishaal writes. We feel the same way!


About OpenPipe

OpenPipe is the easiest way to train and deploy your own fine-tuned models. It takes only a few minutes to get started, and it can cut your costs by up to 25x relative to OpenAI while delivering higher quality.
