Product Updates December 2023

Kyle Corbitt

Jan 3, 2024

Hi there! We'd love to share some of the updates we shipped in December.

Major Launches

Mistral 7B Fine-Tune Optimized

We recently released a new Mistral fine-tune that outperforms Mistral’s base 7B model, and in some cases even outperforms GPT-4 once it has been fine-tuned for a specific customer task! The launch was well-received on HN and Twitter, and we already have dozens of fine-tuned models deployed on top of the new release. Find out more in the announcement post.

Automated Evaluations

We also recently announced automated evaluations. These use the LLM-as-judge pattern to rank model outputs using GPT-4. Evaluations quickly give you an idea of whether your fine-tuned model is doing better or worse than GPT-4, and by how much. Read more in the announcement.
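The LLM-as-judge pattern behind these evaluations can be sketched roughly as follows. This is an illustrative simplification, not OpenPipe's actual implementation: the prompt wording, verdict format, and scoring are all assumptions, and the call to the judge model itself is left out.

```python
# Illustrative sketch of the LLM-as-judge pattern: a strong model
# (e.g. GPT-4) is asked to compare two candidate outputs for the same
# input, and many pairwise verdicts are aggregated into a win rate.

def build_judge_prompt(task_input: str, output_a: str, output_b: str) -> str:
    """Assemble a comparison prompt for the judge model (wording is hypothetical)."""
    return (
        "You are grading two model responses to the same input.\n"
        f"Input: {task_input}\n"
        f"Response A: {output_a}\n"
        f"Response B: {output_b}\n"
        "Answer with exactly one word: A, B, or TIE."
    )

def parse_verdict(judge_reply: str) -> str:
    """Normalize the judge's free-text reply to a verdict; default to TIE."""
    word = judge_reply.strip().upper()
    return word if word in {"A", "B", "TIE"} else "TIE"

def win_rate(verdicts: list) -> float:
    """Fraction of comparisons model A won, counting ties as half a win."""
    score = sum(1.0 if v == "A" else 0.5 if v == "TIE" else 0.0 for v in verdicts)
    return score / len(verdicts)
```

Aggregating many such pairwise verdicts is what yields a single "better or worse than GPT-4, and by how much" number.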

More Product Improvements

We’ve shipped many new features and bugfixes over the last month. Here are a few of the highlights:

  • Our Chat Completions endpoint is now fully OpenAI compatible, which means you can use your fine-tuned models in external tools like Langchain and Haystack simply by changing the base URL.

  • We now officially support uploading request logs directly via our API. This increases compatibility with non-OpenAI models and with programming languages we don’t have SDKs for.

  • We refactored our dataset and training infrastructure to increase the maximum dataset size to 1M rows (previously 50K).

  • Pruning rules let you eliminate repetitive tokens from your prompts and completions, leading to faster and cheaper responses. We shipped major improvements to pruning rules, including the ability to choose which pruning rules to apply on a model-by-model basis to evaluate the quality and performance impact of each rule.

  • Our dataset imports now accept the same JSONL file format OpenAI uses for fine-tuning jobs, improving ecosystem compatibility.

  • We moved our hosting infrastructure to AWS, which has improved product reliability and helped us prepare for our SOC2 audit (currently in progress).

  • Our Python SDK is now compatible with Python 3.8 for those stuck in old Python environments.

  • We improved access controls and added a new “view-only” role type on projects.

  • …and many more!
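To illustrate the base-URL swap described in the Chat Completions bullet above, here is a minimal sketch of how a request is assembled. The base URL and model name are placeholders, not guaranteed values; check OpenPipe's docs for the real ones.

```python
import json

# Sketch of calling a fine-tuned model through an OpenAI-compatible
# Chat Completions endpoint: the request body is standard OpenAI format,
# and only the base URL (and model name) change.
BASE_URL = "https://api.openpipe.ai/api/v1"  # assumed placeholder; check the docs

def chat_completion_request(model: str, messages: list) -> tuple:
    """Return (url, json_body) for an OpenAI-format Chat Completions call."""
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body

url, body = chat_completion_request(
    "openpipe:my-fine-tuned-model",  # hypothetical model id
    [{"role": "user", "content": "Hello!"}],
)
```

Because the request shape is unchanged, tools like Langchain and Haystack that already speak the OpenAI protocol only need their base URL pointed at the new endpoint.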
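The pruning-rules bullet above can be illustrated with a minimal sketch: treat each rule as a fixed text fragment that appears verbatim in every prompt, and strip it before the prompt is sent. This is a simplified illustration of the idea, not OpenPipe's implementation.

```python
# Minimal sketch of prompt pruning: each rule is a static substring that
# repeats across every prompt, so removing it shrinks the token count
# while the fine-tuned model learns to behave as if it were still there.
def apply_pruning_rules(prompt: str, rules: list) -> str:
    """Strip each configured rule's text from the prompt."""
    for rule in rules:
        prompt = prompt.replace(rule, "")
    return prompt

BOILERPLATE = "You are a helpful assistant. Always answer in JSON.\n"
pruned = apply_pruning_rules(BOILERPLATE + "Summarize: hello world", [BOILERPLATE])
# pruned == "Summarize: hello world"
```

Applying rules model-by-model, as described above, lets you measure the quality and latency impact of each rule independently.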
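The JSONL format mentioned in the dataset-import bullet is the same one OpenAI accepts for chat fine-tuning jobs: one JSON object per line, each containing a `messages` array of role/content pairs. A minimal writer and reader:

```python
import io
import json

# One training example per line, in OpenAI's chat fine-tuning format:
# each line is a JSON object with a "messages" array.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "2+2?"},
        {"role": "assistant", "content": "4"},
    ]},
]

buf = io.StringIO()
for ex in examples:
    buf.write(json.dumps(ex) + "\n")  # one compact JSON object per line

jsonl_text = buf.getvalue()
rows = [json.loads(line) for line in jsonl_text.splitlines()]
```

A file in this shape can be uploaded to either ecosystem without conversion, which is the compatibility win described above.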

Hope you're as excited for 2024 as we are!

About OpenPipe

OpenPipe is the easiest way to train and deploy your own fine-tuned models. It only takes a few minutes to get started, and can cut your costs by up to 25x relative to OpenAI while delivering higher quality.
