OpenAI Harmony
OpenAI's response format for its open-weight model series gpt-oss
The gpt-oss models were trained on the harmony response format, which defines conversation structure, reasoning output, and function calls. If you are not running gpt-oss directly but through an API or a provider like Ollama, you will not need to worry about this format, as your inference solution handles it for you. If you are building your own inference solution, this guide walks you through the prompt format. The format is designed to mimic the OpenAI Responses API, so if you have used that API before, it should feel familiar. gpt-oss should not be used without the harmony format, as the model will not work correctly otherwise.
The format enables the model to write to multiple channels: chain of thought, tool-call preambles, and regular responses. It also supports tool namespaces and structured outputs, along with a clear instruction hierarchy. Check out the guide to learn more about the format itself. As an example, here is a fully rendered harmony prompt with a system message, developer instructions, and function tool definitions:
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-06-28
Reasoning: high
# Valid channels: analysis, commentary, final. Channel must be included for every message.
Calls to these tools must go to the commentary channel: 'functions'.<|end|><|start|>developer<|message|># Instructions
Always respond in riddles
# Tools
## functions
namespace functions {
// Gets the location of the user.
type get_location = () => any;
// Gets the current weather in the provided location.
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
format?: "celsius" | "fahrenheit", // default: celsius
}) => any;
} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant
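For illustration, a completion for this prompt might look like the sketch below. Because the prompt already ends with <|start|>assistant, the model continues from there, reasoning first on the analysis channel and then emitting a function call on the commentary channel. The reasoning text here is invented for illustration; only the token structure is the point:
<|channel|>analysis<|message|>User asks for the weather in SF. Call get_current_weather with location "San Francisco, CA".<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location": "San Francisco, CA"}<|call|>
Once the tool result is appended to the conversation, the user-facing answer is sampled on the final channel and terminates with <|return|>.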
We recommend using the openai-harmony library when working with models that use the harmony response format:
- Consistent formatting – a shared implementation for rendering and parsing keeps token sequences loss-free.
- Blazing fast – the heavy lifting happens in Rust.
- First-class Python support – install with pip, typed stubs included, 100% test parity with the Rust suite.
Using Harmony
Python
Check out the full documentation
Installation
Install the package from PyPI by running:
pip install openai-harmony
# or if you are using uv
uv pip install openai-harmony
Example
from openai_harmony import (
    load_harmony_encoding,
    HarmonyEncodingName,
    Role,
    Message,
    Conversation,
    DeveloperContent,
    SystemContent,
)

enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

convo = Conversation.from_messages([
    Message.from_role_and_content(Role.SYSTEM, SystemContent.new()),
    Message.from_role_and_content(
        Role.DEVELOPER,
        DeveloperContent.new().with_instructions("Talk like a pirate!"),
    ),
    Message.from_role_and_content(Role.USER, "Arrr, how be you?"),
])

tokens = enc.render_conversation_for_completion(convo, Role.ASSISTANT)
print(tokens)

# Later, after the model responded …
parsed = enc.parse_messages_from_completion_tokens(tokens, role=Role.ASSISTANT)
print(parsed)
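The rendered tokens feed straight into your inference stack. Below is a minimal sketch of the full round trip; generate is a hypothetical stand-in for your own token-in/token-out model call (vLLM, transformers, etc.) and is not part of openai-harmony:
from openai_harmony import load_harmony_encoding, HarmonyEncodingName, Role

enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

# Tokens that legally end an assistant turn; pass these to your sampler
# so generation stops at <|return|> or <|call|>.
stop_ids = enc.stop_tokens_for_assistant_actions()

# `tokens` is the rendered prompt from the example above.
# `generate` is a placeholder for your own model call.
completion_tokens = generate(prompt_tokens=tokens, stop_token_ids=stop_ids)

# Parse the sampled tokens back into structured messages.
for message in enc.parse_messages_from_completion_tokens(completion_tokens, role=Role.ASSISTANT):
    print(message)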
Rust
Check out the full documentation
Installation
Add the dependency to your Cargo.toml:
[dependencies]
openai-harmony = { git = "https://github.com/openai/harmony" }
Example
use openai_harmony::chat::{Conversation, Message, Role};
use openai_harmony::{load_harmony_encoding, HarmonyEncodingName};

fn main() -> anyhow::Result<()> {
    let enc = load_harmony_encoding(HarmonyEncodingName::HarmonyGptOss)?;
    let convo = Conversation::from_messages([
        Message::from_role_and_content(Role::User, "Hello there!"),
    ]);
    let tokens = enc.render_conversation_for_completion(&convo, Role::Assistant)?;
    println!("{:?}", tokens);
    Ok(())
}
Contributing
The majority of the rendering and parsing is built in Rust for performance and exposed to Python through thin pyo3 bindings.
┌──────────────────┐ ┌───────────────────────────┐
│ Python code │ │ Rust core (this repo) │
│ (dataclasses, │────► │ • chat / encoding logic │
│ convenience) │ │ • tokeniser (tiktoken) │
└──────────────────┘ FFI └───────────────────────────┘
Repository layout
.
├── src/ # Rust crate
│ ├── chat.rs # High-level data-structures (Role, Message, …)
│ ├── encoding.rs # Rendering & parsing implementation
│ ├── registry.rs # Built-in encodings
│ ├── tests.rs # Canonical Rust test-suite
│ └── py_module.rs # PyO3 bindings ⇒ compiled as openai_harmony.*.so
│
├── harmony/ # Pure-Python wrapper around the binding
│ └── __init__.py # Dataclasses + helper API mirroring chat.rs
│
├── tests/ # Python test-suite (1-to-1 port of tests.rs)
├── Cargo.toml # Rust package manifest
├── pyproject.toml # Python build configuration for maturin
└── README.md # You are here 🖖
Developing locally
Prerequisites
- Rust tool-chain (stable) – https://rustup.rs
- Python ≥ 3.8 + virtualenv/venv
- maturin – build tool for PyO3 projects
1. Clone & bootstrap
git clone https://github.com/openai/harmony.git
cd harmony

# Create & activate a virtualenv
python -m venv .venv
source .venv/bin/activate

# Install maturin and test dependencies
pip install maturin pytest mypy ruff   # tailor to your workflow

# Compile the Rust crate *and* install the Python package in editable mode
maturin develop -F python-binding --release
maturin develop -F python-binding builds harmony with Cargo, produces a native extension (openai_harmony.*.so) and places it in your virtualenv next to the pure-Python wrapper – similar to pip install -e . for pure Python projects.
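A quick smoke test, assuming the virtualenv is still active, confirms the native extension imports and an encoding loads:
# smoke test: the compiled extension is importable and an encoding loads
from openai_harmony import load_harmony_encoding, HarmonyEncodingName

enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)
print("harmony encoding loaded:", enc)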
2. Running the test-suites
Rust:
cargo test # runs src/tests.rs
Python:
pytest # executes tests/ (mirrors the Rust suite)
Run both in one go to ensure parity:
pytest && cargo test
3. Type-checking & formatting (optional)
mypy harmony      # static type analysis
ruff check .      # linting
cargo fmt --all   # Rust formatter