Fine-tuning for developers

Replace your prompts with faster, cheaper fine-tuned models.

Shorten your deployment loop

Keep your datasets, models, and evaluations all in one place. Train new models with the click of a button.

Capture Data

Automatically record LLM requests and responses.

Train Models

Create datasets from your captured data. Train multiple base models on the same dataset.

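A captured request/response pair maps naturally onto a chat-format training example. As a rough sketch (the `CapturedCall` record shape here is hypothetical, not OpenPipe's actual export format), turning captured calls into JSONL training rows might look like:

```typescript
// Hypothetical shape for a captured LLM call; OpenPipe's actual
// export format may differ.
type CapturedCall = {
  messages: { role: string; content: string }[]; // request messages
  response: string; // the assistant completion that was returned
};

// Convert captured calls into chat-format JSONL training rows,
// one {"messages": [...]} object per line.
function toTrainingJsonl(calls: CapturedCall[]): string {
  return calls
    .map((call) =>
      JSON.stringify({
        messages: [
          ...call.messages,
          { role: "assistant", content: call.response },
        ],
      })
    )
    .join("\n");
}

const jsonl = toTrainingJsonl([
  {
    messages: [{ role: "user", content: "Count to 10" }],
    response: "1 2 3 4 5 6 7 8 9 10",
  },
]);
```

Because every base model trains on the same JSONL rows, swapping base models is just a matter of re-running training on the same dataset.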
import OpenAI from "openpipe/openai";

const client = new OpenAI();

const completion = await client.chat.completions.create({
  // model: "gpt-3.5-turbo", // original model
  model: "openpipe:your-fine-tuned-model-id",
  messages: [{
      role: "user",
      content: "Count to 10",
  }],
});

Automatic Deployment

We serve your model on our managed endpoints that scale to millions of requests.

Evaluate & Compare

Write evaluations and compare model outputs side by side.

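At its simplest, a side-by-side evaluation scores each model's output on the same inputs and tallies the results per model. This is an illustrative sketch using exact-match scoring (the `EvalRow` shape and model names are hypothetical; hosted evals are typically richer than this):

```typescript
// One eval row: an input, the expected answer, and each model's output.
type EvalRow = {
  input: string;
  expected: string;
  outputs: Record<string, string>; // model name -> model output
};

// Tally exact-match passes per model across all rows.
function compareModels(rows: EvalRow[]): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const row of rows) {
    for (const [model, output] of Object.entries(row.outputs)) {
      const pass = output.trim() === row.expected.trim() ? 1 : 0;
      scores[model] = (scores[model] ?? 0) + pass;
    }
  }
  return scores;
}

const scores = compareModels([
  {
    input: "Count to 3",
    expected: "1 2 3",
    outputs: { "gpt-4-turbo": "1 2 3", "openpipe:my-model": "1 2 3" },
  },
  {
    input: "Capital of France?",
    expected: "Paris",
    outputs: { "gpt-4-turbo": "Paris", "openpipe:my-model": "paris" },
  },
]);
// scores: { "gpt-4-turbo": 2, "openpipe:my-model": 1 }
```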
Integrate in 5 minutes

Change a couple lines of code, and you're good to go.

// import OpenAI from 'openai'
import OpenAI from "openpipe/openai";

const openai = new OpenAI({
  apiKey: "my-openai-api-key",
  openpipe: {
    apiKey: "my-openpipe-api-key",
  },
});

Install the SDK

Simply replace your Python or JavaScript OpenAI SDK and add an OpenPipe API key.

await openai.chat.completions.create({
  ...request,
  openpipe: {
    // optional searchable tags
    tags: {
      prompt_id: "classifyEmail",
    },
  },
});

Track prompt versions

Make your data searchable with custom tags.
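Once each request carries tags like `prompt_id`, searching your captured data reduces to filtering on them. A minimal sketch (the `LoggedRequest` record shape is hypothetical, not OpenPipe's actual log format):

```typescript
// Hypothetical shape for a logged request with its custom tags.
type LoggedRequest = { id: string; tags: Record<string, string> };

// Return only the logged requests whose tag `key` equals `value`.
function filterByTag(
  logs: LoggedRequest[],
  key: string,
  value: string
): LoggedRequest[] {
  return logs.filter((log) => log.tags[key] === value);
}

const logs: LoggedRequest[] = [
  { id: "req_1", tags: { prompt_id: "classifyEmail" } },
  { id: "req_2", tags: { prompt_id: "summarize" } },
  { id: "req_3", tags: { prompt_id: "classifyEmail" } },
];

const matches = filterByTag(logs, "prompt_id", "classifyEmail");
// matches req_1 and req_3
```

Tagging by prompt version also makes it easy to build a training dataset from only the requests that used a particular prompt.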

Start saving

Small specialized models cost much less to run than large multipurpose LLMs.

Save time.

Replace prompts with models in minutes, not weeks.

And money.

8x cheaper than GPT-4 Turbo. Prune tokens to save even more.

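To see where an 8x difference lands in dollar terms, here is back-of-the-envelope cost math. The per-token prices below are hypothetical placeholders, not current rates for any specific model:

```typescript
// Estimate monthly inference spend from volume and price per
// million tokens. Inputs here are illustrative assumptions.
function monthlyCostUSD(
  requestsPerMonth: number,
  tokensPerRequest: number,
  pricePerMillionTokens: number
): number {
  return (requestsPerMonth * tokensPerRequest * pricePerMillionTokens) / 1_000_000;
}

// 1M requests/month at 1,000 tokens each:
const bigModel = monthlyCostUSD(1_000_000, 1_000, 10); // $10,000/month
const fineTuned = monthlyCostUSD(1_000_000, 1_000, 1.25); // 8x cheaper: $1,250/month
```

Pruning tokens shrinks `tokensPerRequest`, so it compounds with the cheaper per-token price.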
$3M

You're in good company.

Since September, we've saved our customers over $3M in inference costs.

Lower costs, not quality

Fine-tuned Mistral and Llama 2 models consistently outperform GPT-4-1106-Turbo, at a fraction of the cost.

The future is open.

We're open-source, and so are many of the base models we use. Own your own weights when you fine-tune Mistral and Llama 2, and download them at any time.

Start collecting

Install the OpenPipe SDK and fine-tune your first model.