All Posts
Technical
10 minutes
Aug 28, 2024
Fine-tuning Best Practices Chapter 2: Models
Chapter 2 of OpenPipe's fine-tuning best practices series. This chapter focuses on models and the trade-offs to weigh when choosing one, with the goal of creating the highest-performing fine-tuned LLM.
Technical
10 minutes
Aug 1, 2024
Fine-tuning Best Practices Series Introduction and Chapter 1: Training Data
A post introducing our series on LLM fine-tuning best practices, along with the first chapter, which covers training data.
News
4 minutes
Jul 24, 2024
Announcing Llama 3.1 and GPT-4o Mini fine-tuning through OpenPipe!
News
10 minutes
Jun 20, 2024
OpenPipe Mixture of Agents: Outperform GPT-4 at 1/25th the Cost
We’re excited to announce a new family of “Mixture of Agents” models optimized for generating synthetic training data.
Technical
5 minutes
May 23, 2024
The Ten Commandments of Fine-Tuning in Prod (a Mastering LLMs Conference Talk)
Here’s a summary and the full video of a talk given at the Mastering LLMs Conference, titled “The Ten Commandments of Fine-Tuning in Prod.”
News
4 minutes
Apr 21, 2024
What we've learned in 3 days of Llama 3
You’ve probably heard about Llama 3, a new open-source LLM. Three variants have been announced so far: a small 8B-parameter model, a medium 70B-parameter model, and a very large 405B-parameter model (still in training). I wanted to share the most important things we’ve learned about these models.
Basics
5 minutes
Mar 28, 2024
Fine-Tuning in a Nutshell
Hi, I’m David, the CTO of OpenPipe. This is a post for anyone who wants to build high-level intuition around fine-tuning.
News
3 minutes
Mar 25, 2024
We Raised $6.7M to Replace GPT-4 with Your Own Fine-Tuned Models
Today I’m excited to announce the close of our $6.7M seed round.
Comparison
4 minutes
Feb 29, 2024
Mixtral Curious? Comparing Mistral 7B and Mixtral for fine-tuning
Evals can be tricky, but we’ve found that LLM-as-judge is the most reliable way to compare two outputs.
Technical
5 minutes
Jan 17, 2024
S-LoRA: Serving Thousands of Models From One GPU for Fun and Profit
S-LoRA describes a set of optimizations for running thousands of separate LLMs simultaneously on the same GPU.
Technical
3 minutes
Jan 4, 2024
Axis Improves Generation Quality and Lowers Costs With Fine Tuning
Axis is a modern platform to help multinational enterprises understand their local markets across the world.
News
3 minutes
Jan 3, 2024
Product Updates December 2023
Hi there! We'd love to share some of the updates we shipped in December.
Comparison
7 minutes
Dec 18, 2023
How we built “Mistral 7B Fine-Tune Optimized,” the best 7B model for fine-tuning
Fine-tunes based on our new model are slightly stronger than GPT-4, as measured by GPT-4 itself.
News
3 minutes
Dec 1, 2023
Announcing Automatic Evals for Fine-tuned Models
At OpenPipe we’re building a fully managed fine-tuning platform for developers.
Technical
8 minutes
Nov 8, 2023
Is AI the Next Crypto? Insights from 2M HN comments
Both crypto and AI have been heavily debated on Hacker News, with discussions going back years.
Technical
2 minutes
Nov 5, 2023
Llama 2 vs Mistral: Believe the Hype
If you’re fine-tuning a task-specific LLM under 34B parameters, you should probably be using Mistral.
Technical
2 minutes
Sep 12, 2023
Fine-tune your own Llama 2 to replace GPT-3.5/4
I've been playing around with fine-tuning models for a couple of years and wanted to share some insights and practical code.
Comparison
4 minutes
Aug 28, 2023
From Prompts to Models
OpenPipe lets you capture your existing prompts and completions, and then use them to fine-tune a model specific to your use-case.