Fine-tuning for developers

Replace your prompts with faster, cheaper fine-tuned models.

Shorten your deployment loop

Keep your datasets, models, and evaluations all in one place. Train new models with the click of a button.

Capture Data

Automatically record LLM requests and responses.
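Conceptually, capture works like a thin recording layer around your LLM client: every request/response pair is appended to a log, together with searchable tags, and later becomes training data. A minimal sketch of that idea (the `CapturedCall` type and `recordCall` helper are illustrative only, not the real SDK internals):

```typescript
// Conceptual sketch only -- not the real SDK internals.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface CapturedCall {
  request: { model: string; messages: ChatMessage[] };
  response: ChatMessage;
  tags: Record<string, string>;
}

// In the real SDK this log lives server-side; here it is a plain array.
const capturedCalls: CapturedCall[] = [];

// Record one request/response pair along with searchable tags.
function recordCall(
  request: CapturedCall["request"],
  response: ChatMessage,
  tags: Record<string, string> = {},
): void {
  capturedCalls.push({ request, response, tags });
}

recordCall(
  { model: "gpt-3.5-turbo", messages: [{ role: "user", content: "Count to 10" }] },
  { role: "assistant", content: "1 2 3 4 5 6 7 8 9 10" },
  { prompt_id: "countingDemo" },
);

console.log(capturedCalls.length); // 1
```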

Train Models

Create datasets from your captured data. Train multiple base models on the same dataset.
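Chat fine-tuning datasets are commonly stored as JSONL: one complete exchange per line, each ending in the desired assistant reply, in the OpenAI-style chat format. A minimal sketch of serializing captured exchanges that way (check the OpenPipe docs for the exact schema the platform expects):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Each training example is a full conversation ending in the desired reply.
const examples: { messages: ChatMessage[] }[] = [
  {
    messages: [
      { role: "user", content: "Count to 3" },
      { role: "assistant", content: "1 2 3" },
    ],
  },
  {
    messages: [
      { role: "user", content: "Count to 5" },
      { role: "assistant", content: "1 2 3 4 5" },
    ],
  },
];

// Serialize to JSONL: one JSON object per line.
const jsonl = examples.map((e) => JSON.stringify(e)).join("\n");
console.log(jsonl.split("\n").length); // 2
```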

import OpenAI from "openpipe/openai";

const client = new OpenAI();

const completion = await client.chat.completions.create({
  // model: "gpt-3.5-turbo", // original model
  model: "openpipe:your-fine-tuned-model-id",
  messages: [{
      role: "user",
      content: "Count to 10",
  }],
});

Automate Deployment

We serve your model on our managed endpoints that scale to millions of requests.

Evaluate & Compare

Write evaluations and compare model outputs side by side.
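A side-by-side evaluation can be as simple as running the same inputs through two models and scoring each set of outputs with the same metric. A minimal sketch using exact-match accuracy (the model outputs here are hard-coded placeholders; in practice they would come from completion calls):

```typescript
interface EvalCase {
  input: string;
  expected: string;
}

const cases: EvalCase[] = [
  { input: "2+2", expected: "4" },
  { input: "3+3", expected: "6" },
];

// Placeholder outputs standing in for two models' completions.
const outputsA = ["4", "6"]; // e.g. the original prompt + base model
const outputsB = ["4", "7"]; // e.g. a fine-tuned candidate

// Exact-match accuracy: fraction of cases where the output equals the expected answer.
function accuracy(outputs: string[], evalCases: EvalCase[]): number {
  const hits = outputs.filter((o, i) => o === evalCases[i].expected).length;
  return hits / evalCases.length;
}

console.log(accuracy(outputsA, cases)); // 1
console.log(accuracy(outputsB, cases)); // 0.5
```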

Integrate in 5 minutes

Change a couple lines of code, and you're good to go.

// import OpenAI from 'openai'
import OpenAI from "openpipe/openai";

const openai = new OpenAI({
  apiKey: "my api key", // your OpenAI API key
  openpipe: {
    apiKey: "my api key", // your OpenPipe API key
  },
});

Install the SDK

Simply replace your Python or JavaScript OpenAI SDK and add an OpenPipe API key.
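Assuming the standard package names (`openpipe` on both npm and PyPI; verify against the OpenPipe docs), installation is a single command:

```shell
# JavaScript / TypeScript
npm install openpipe

# Python
pip install openpipe
```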

openai.chat.completions.create(
  {
    ...request,
    openpipe: { // optional
      tags: {
        prompt_id: "classifyEmail",
      },
    },
  },
);

Track prompt versions

Make your data searchable with custom tags.

What our users say

OpenPipe increased our inference speed by 3x compared to GPT-4 Turbo while reducing cost by >10x. It’s a no-brainer for any company that uses LLMs in prod.

David Paffenholz

CEO & Co-founder • Juicebox

We used OpenPipe to process a huge dataset we needed classified. GPT-4 would have cost us $60K, but with OpenPipe it was only a few hundred dollars. The process was super simple and results were great. The OpenPipe team is the real deal and really knows their stuff.

Sahil Chopra

Co-founder • Linum

InVintory processes millions of wine labels every month, and GPT-4 was prohibitively expensive to continue using. OpenPipe allowed us to train a model that is just as accurate at 1/8th the cost. I’d highly recommend them for fine-tuning task-specific models!

Sam Finan

CTO • InVintory

OpenPipe has been huge for us! They’ve made it easy and cheap to deploy fine tunes and rapidly iterate on them. We’ve deployed ~10 fine tunes on OpenPipe in the last few months and have been able to ship some big improvements to our quest + inventory features because of them. Their support has also been amazing!

Will Liu

Friends & Fables

For us, the biggest benefit was lowering time to production. OpenPipe lets us focus on our IP and use the platform to train, review and deploy models in a few clicks with confidence.

Alex Rodriguez

Tech Lead • Axis

We’re using OpenPipe to train our custom voice bots. Our fine-tuned models are much lower latency than OpenAI’s, so we’re able to provide a much better user experience for our customers.

Pablo Palafox

Co-founder • Happy Robot

Start saving

Small specialized models cost much less to run than large multipurpose LLMs.

Save time.

Replace prompts with models in minutes, not weeks.

And money.

8x cheaper than GPT-4 Turbo. Prune tokens to save even more.

$5M

You're in good company.

This year alone, we've saved our customers over $5M in inference costs.

Lower costs, not quality

Fine-tuned Mistral and Llama 2 models consistently outperform GPT-4-1106-Turbo, at a fraction of the cost.

The future is open.

Own your own weights when you fine-tune open-source models like Mistral and Llama 2, and download them at any time.

Start collecting

Install the OpenPipe SDK and fine-tune your first model.