OpenPipe: fine-tuning for developers

Replace your prompts with faster, cheaper fine-tuned models.

Turn LLM output into training data

Use the OpenPipe SDK to automatically log your existing LLM requests and responses.

Train your own fine-tuned models

Build datasets from your request logs and fine-tune a model that fits your needs exactly.

Integrate in 5 minutes

// import OpenAI from "openai"
import OpenAI from "openpipe/openai";

const openai = new OpenAI({
  apiKey: "my api key", // your OpenAI API key
  openpipe: {
    apiKey: "my api key", // your OpenPipe API key
  },
});

Install the SDK

Simply replace your Python or JavaScript OpenAI SDK import and add an OpenPipe API key.

openai.chat.completions.create({
  ...request,
  openpipe: {
    // optional: tags are stored alongside the logged request
    tags: {
      prompt_id: "classifyEmail",
    },
  },
});

Track prompt versions

Tag your requests to make it easier to filter them into datasets later.

Save time.

Replace prompts with models in minutes, not weeks.

And money.

8x cheaper than GPT-4 Turbo. Prune tokens to save even more.

With higher accuracy.

On average, customers see 80% fewer errors compared to GPT-3.5.

The future is open.

Fine-tune open-source models, and download the weights at any time.

Start collecting now

Integrate the OpenPipe SDK and prepare for cheaper inference.