OpenAI
batchling is compatible with OpenAI through any supported framework
OpenAI supports batching on the following endpoints:

- /v1/responses
- /v1/chat/completions
- /v1/embeddings
- /v1/completions
- /v1/moderations
- /v1/images/generations
- /v1/images/edits
Check model support and batch pricing
Before sending batches, review the provider's official pricing page for supported models and batch pricing details.
OpenAI's Batch API documentation is available at the following URL:
https://platform.openai.com/docs/guides/batch
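The batch input format documented there is JSONL: each line of the uploaded file is one independent request. A minimal sketch of a single line targeting /v1/chat/completions, with an illustrative `custom_id` and body (batchling builds these lines for you, so this is only for reference):

```python
import json

# One Batch API input line. `custom_id` is your own identifier,
# echoed back in the output file so you can match requests to results.
request_line = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/chat/completions",  # must be a batch-compatible endpoint
    "body": {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
}

# Each request is serialized onto a single line of the .jsonl file.
print(json.dumps(request_line))
```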
Example Usage
API key required
Set OPENAI_API_KEY in .env or ensure it is already loaded in your environment variables before running batches.
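A minimal .env file for this example might look like the following (the key value is a placeholder; use your own):

```
OPENAI_API_KEY=sk-your-key-here
```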
Here's an example showing how to use batchling with OpenAI:
openai_example.py

```python
import asyncio
import os

from dotenv import load_dotenv
from openai import AsyncOpenAI

from batchling import batchify

load_dotenv()


async def build_tasks() -> list:
    """Build OpenAI requests."""
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    questions = [
        "Who is the best French painter? Answer in one short sentence.",
        "What is the capital of France?",
    ]
    return [client.responses.create(input=question, model="gpt-4o-mini") for question in questions]


async def main() -> None:
    """Run the OpenAI example."""
    tasks = await build_tasks()
    responses = await asyncio.gather(*tasks)
    for response in responses:
        content = response.output[-1].content  # skip reasoning output, get straight to the answer
        print(f"{response.model} answer:\n{content[0].text}\n")


async def run_with_batchify() -> None:
    """Run `main` inside `batchify` for direct script execution."""
    async with batchify():
        await main()


if __name__ == "__main__":
    asyncio.run(run_with_batchify())
```
Output: