Introducing Direct Preference Optimization (DPO) Support on OpenPipe

Fine-tuning for production apps

Train higher-quality, faster models that continuously improve.

  • 90% fewer errors in production
  • 5 minutes to start collecting training data
  • 8x cheaper than GPT-4o

Higher quality, lower costs

Fine-tuned Llama 3.1 models consistently outperform GPT-4o at a fraction of the cost.

Shorten your deployment loop. Save time and money.

Keep your datasets, models and evaluations in one place.

Capture Data

Automatically record LLM requests and responses.
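
In practice, capture is a drop-in change. A minimal sketch, assuming the Python SDK's OpenAI-compatible client (the `openpipe` argument and key format shown here are illustrative):

```python
# A minimal sketch, assuming the Python SDK's drop-in OpenAI-compatible client.
from openpipe import OpenAI

client = OpenAI(
    openpipe={"api_key": "opk_..."},  # illustrative key; enables request logging
)

# This call behaves like a normal OpenAI chat completion, but the
# request and response are also recorded as a training example.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Classify the sentiment of the message."},
        {"role": "user", "content": "The new release fixed every bug I reported!"},
    ],
)
print(completion.choices[0].message.content)
```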

Train Models

Two clicks to train a state-of-the-art model on your data.
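
Each captured request/response pair becomes one training row. The sketch below uses the standard OpenAI chat fine-tuning JSONL layout; treat the exact export format as an assumption:

```python
import json

# One captured request/response pair, as a chat-formatted training row.
row = {
    "messages": [
        {"role": "system", "content": "Classify the sentiment of the message."},
        {"role": "user", "content": "The new release fixed every bug I reported!"},
        {"role": "assistant", "content": "positive"},
    ]
}

# Datasets are commonly exchanged as JSONL: one JSON object per line.
with open("dataset.jsonl", "a") as f:
    f.write(json.dumps(row) + "\n")
```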

Automate Deployment

We serve your model on our managed endpoints that scale to millions of requests.
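
Deployed models sit behind the same chat-completions interface, so cutover is a one-line change to the model name. A sketch, with a hypothetical model slug:

```python
from openpipe import OpenAI

client = OpenAI(openpipe={"api_key": "opk_..."})  # illustrative key format

# Same call as before; only the model name changes.
completion = client.chat.completions.create(
    model="openpipe:sentiment-classifier-v1",  # hypothetical model slug
    messages=[
        {"role": "user", "content": "The new release fixed every bug I reported!"},
    ],
)
print(completion.choices[0].message.content)
```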

Evaluate & Compare

Use LLM-as-judge evals to quickly gauge performance.
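
LLM-as-judge means asking a strong model to compare two candidate outputs for the same input. A minimal sketch of the idea with a plain OpenAI client and a hypothetical judge prompt, not OpenPipe's built-in evaluator:

```python
from openai import OpenAI

client = OpenAI()

def judge(prompt: str, response_a: str, response_b: str) -> str:
    """Ask a judge model which of two responses better answers the prompt."""
    verdict = client.chat.completions.create(
        model="gpt-4o",  # the judge model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are comparing two responses to the same prompt. "
                    "Reply with exactly 'A' or 'B', whichever is better."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Prompt:\n{prompt}\n\n"
                    f"Response A:\n{response_a}\n\n"
                    f"Response B:\n{response_b}"
                ),
            },
        ],
    )
    return verdict.choices[0].message.content.strip()

# Compare a base-model output against a fine-tuned model's output.
print(judge("Summarize this support ticket...", "base model output", "fine-tuned output"))
```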

Intuitive data and model management

Collect logs, fine-tune, evaluate. Simple.

  • Collect Data
  • Fine-Tune
  • Evaluate
  • Monitor

What our users say

OpenPipe increased our inference speed by 3x compared to GPT-4 Turbo while reducing cost by >10x. It’s a no-brainer for any company that uses LLMs in prod.

David Paffenholz

CEO & Co-founder • Juicebox

We used OpenPipe to process a huge dataset we needed classified. GPT-4 would have cost us $60K, but with OpenPipe it was only a few hundred dollars. The process was super simple and results were great. The OpenPipe team is the real deal and really knows their stuff.

Sahil Chopra

Co-founder • Linum

InVintory processes millions of wine labels every month, and GPT-4 was prohibitively expensive to continue using. OpenPipe allowed us to train a model that is just as accurate at 1/8th the cost. I’d highly recommend them for fine-tuning task-specific models!

Sam Finan

CTO • InVintory

OpenPipe has been huge for us! They’ve made it easy and cheap to deploy fine-tunes and rapidly iterate on them. We’ve deployed ~10 fine-tunes on OpenPipe in the last few months and have been able to ship some big improvements to our quest + inventory features because of them. Their support has also been amazing!

Will Liu

Friends & Fables

For us, the biggest benefit was lowering time to production. OpenPipe lets us focus on our IP and use the platform to train, review and deploy models in a few clicks with confidence.

Alex Rodriguez

Tech Lead • Axis

We’re using OpenPipe to train our custom voice bots. Our fine-tuned models are much lower latency than OpenAI’s, so we’re able to provide a much better user experience for our customers.

Pablo Palafox

Co-founder • Happy Robot

Flexible Plans

For more details visit our pricing page.

Developer

Per-Token

Designed for quick onboarding and high quality with minimal effort

Get Started with $100 Free Credits

Plan includes:

  • Autoscaling
  • Metrics & Analytics
  • 50k training rows per dataset
  • Up to 50 fine-tuned models

From $4 per 1M tokens for training, and $0.40 / $0.45 per 1M input / output tokens for inference.
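
To make the rates concrete, a back-of-the-envelope comparison; the GPT-4o list prices and traffic volume below are assumptions for illustration:

```python
# Hypothetical monthly traffic: 100M input tokens, 20M output tokens.
input_tok, output_tok = 100e6, 20e6

# Developer plan inference rates from above (USD per 1M tokens).
openpipe = input_tok / 1e6 * 0.40 + output_tok / 1e6 * 0.45

# Assumed GPT-4o list prices (USD per 1M tokens) at the time of writing.
gpt4o = input_tok / 1e6 * 2.50 + output_tok / 1e6 * 10.00

print(f"Fine-tuned model: ${openpipe:,.0f}/mo")      # $49/mo
print(f"GPT-4o:           ${gpt4o:,.0f}/mo")         # $450/mo
print(f"Savings:          {gpt4o / openpipe:.1f}x")  # ~9.2x
```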

Business

Enterprise

Designed for larger-scale use by service-oriented companies and enterprises

Contact Us

Everything in Per-Token plus:

  • HIPAA, SOC 2, GDPR compliance
  • Custom relabeling techniques
  • Active Learning
  • 500k training rows per dataset
  • Discounted token rates
  • Unlimited fine-tuned models

Pricing Page

For custom plan inquiries, contact us at hello@openpipe.ai.

Scale with security

Move quickly and confidently.

SOC 2 Type 2

Know your data is safe in OpenPipe.

HIPAA

Keep patient info secure.

GDPR

Handle EU user data in compliance with the latest regulations.

Start collecting on OpenPipe today

Fine-tune the right way

Get Started

Easy Integration

Simply update your SDK import statement and add an OpenPipe API key. Replace prompts with models in minutes, not weeks.
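
Assuming the Python SDK, the whole integration is a change at client construction; call sites stay untouched:

```python
# Before: the stock OpenAI client.
#   from openai import OpenAI
#   client = OpenAI()

# After: the OpenPipe drop-in, with one extra key for logging.
from openpipe import OpenAI

client = OpenAI(openpipe={"api_key": "opk_..."})  # illustrative key format

# Every chat completion made through `client` is now captured as
# potential training data; the rest of the code does not change.
```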

Own Your Weights

When you fine-tune open-source models, the weights are yours, and you can deploy them anywhere you need.
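
Because the weights are yours, an exported checkpoint can run on any open-source inference stack. A sketch using vLLM, with a hypothetical local export path:

```python
from vllm import LLM, SamplingParams  # pip install vllm

# Load the exported fine-tuned checkpoint from local disk.
llm = LLM(model="./my-finetuned-llama-3.1-8b")  # hypothetical export path

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(
    ["Classify the sentiment: 'The new release fixed every bug I reported!'"],
    params,
)
print(outputs[0].outputs[0].text)
```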

From our blog

Read our latest posts to find out more about what we've been working on lately!

Visit Our Blog

News

Oct 28th, 2024

RL and $4.80 GPU Time vs 5M HN Posts (RLHF Part 1)

10 minutes

We used reinforcement learning and $4.80 of GPU time to find the best HN post ever.

News

Oct 1st, 2024

Introducing DPO Support on OpenPipe

5 minutes

We're thrilled to announce that OpenPipe is the first fine-tuning platform to support Direct Preference Optimization (DPO)!
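
For context: DPO fine-tunes directly on preference pairs (a chosen and a rejected response to the same prompt), skipping the separate reward model used in classic RLHF. A conceptual sketch of the loss in PyTorch, not OpenPipe's implementation:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a batch of preference pairs.

    Each tensor holds summed log-probabilities of full responses under
    the policy being trained or the frozen reference model.
    """
    # How much more the policy prefers each response than the reference does.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp

    # Maximize the gap between chosen and rejected margins, scaled by beta.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy batch of two preference pairs with made-up log-probabilities.
loss = dpo_loss(
    torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -9.0]),
    torch.tensor([-12.5, -10.0]), torch.tensor([-13.0, -9.5]),
)
print(loss)  # a scalar the optimizer would minimize
```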

News

Mar 25th, 2024

We Raised $6.7M to Replace GPT-4 with Custom Models

3 minutes

Today I'm excited to announce the close of our $6.7M seed round.

About OpenPipe

OpenPipe is the easiest way to train and deploy your own fine-tuned models. It takes only a few minutes to get started, and it can reduce your costs by as much as 25x relative to OpenAI while delivering higher quality.
