- Shared client for efficient API calls with persistent connections
- Exponential backoff for rate limits and errors, with a configurable number of attempts (see the sketch after this list)
- Concurrent API calls with semaphore control and order preservation
- Clean notebook output and error handling for interactive development
- Process pandas DataFrames with built-in message-column handling
- Simple setup that works with OpenAI and Azure OpenAI out of the box
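The retry behavior works roughly like the conceptual sketch below. This illustrates exponential backoff with jitter in general, not wurun's internal code; the attempt count, delay base, cap, and caught exception type are all assumptions.

```python
import asyncio
import random

async def with_backoff(call, max_attempts=5, base=0.5, cap=30.0):
    """Retry an async callable with exponential backoff and jitter (conceptual sketch)."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:  # wurun would narrow this to rate-limit and transient errors
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # Exponential delay: base * 2^attempt, capped, with random jitter
            delay = min(cap, base * 2**attempt)
            await asyncio.sleep(delay * random.uniform(0.5, 1.5))
```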
```bash
pip install wurun

# Optional extras
pip install "wurun[dataframe]"   # pandas DataFrame support
pip install "wurun[dev]"         # development dependencies
```
```python
from wurun import Wurun

# Set up the shared client once per kernel
await Wurun.setup(
    endpoint="https://api.openai.com/v1",
    api_key="your-api-key",
    deployment_name="gpt-3.5-turbo",
)

# Single question
messages = [{"role": "user", "content": "Explain asyncio"}]
answer = await Wurun.ask(messages)
print(answer)

# Cleanup
await Wurun.close()
```

Top-level `await` works directly in Jupyter/IPython; in a plain script, wrap these calls in `asyncio.run()`.
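`Wurun.ask()` also accepts `return_meta=True` (see the API summary below) to report latency and retry count. A minimal sketch, assuming the call then returns the answer together with a metadata object (the exact return shape is an assumption):

```python
# Assumption: with return_meta=True, ask() returns the answer plus a
# metadata object carrying latency and retry count.
answer, meta = await Wurun.ask(messages, return_meta=True)
print(answer)
print(meta)
```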
```python
# Batch processing: answers come back in the same order as the questions
questions = [
    [{"role": "user", "content": "What is Python?"}],
    [{"role": "user", "content": "What is JavaScript?"}],
]
answers = await Wurun.run_gather(questions, concurrency=2)
```
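To handle answers as soon as each call finishes rather than waiting for the whole batch, the API summary below lists `Wurun.run_as_completed()`. A sketch under the assumption that it mirrors `run_gather()`'s arguments and yields results as they complete:

```python
# Assumption: run_as_completed() takes the same arguments as run_gather()
# and yields each result as it finishes, so output order is not guaranteed.
async for result in Wurun.run_as_completed(questions, concurrency=2):
    print(result)
```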
```python
# DataFrame processing
import pandas as pd

df = pd.DataFrame({'messages': questions})
answers = await Wurun.run_dataframe(df, 'messages')
df['answers'] = answers
```
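The printing helpers in the API summary below can render these results in the notebook. A hedged sketch, assuming `Wurun.print_qna_ordered()` takes the questions and their in-order answers:

```python
# Assumption: print_qna_ordered() pairs each question with its answer
# and pretty-prints them in input order.
Wurun.print_qna_ordered(questions, answers)
```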
- `Wurun.setup()` - Initialize the client
- `Wurun.close()` - Clean up resources
- `Wurun.ask()` - Single API call with retry
  - `return_meta=True` - Include latency and retry count
- `Wurun.run_gather()` - Preserve input order
- `Wurun.run_as_completed()` - Process as results finish
- `Wurun.run_dataframe()` - Process DataFrame columns
- `Wurun.print_qna_ordered()` - Pretty-print Q&A
- `Wurun.print_as_ready()` - Print as completed
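For Azure OpenAI, setup is expected to look the same; a minimal sketch assuming `setup()` takes identical parameters, with `endpoint` pointing at the Azure resource and `deployment_name` naming the model deployment:

```python
# Assumption: the same setup() parameters work for Azure OpenAI.
await Wurun.setup(
    endpoint="https://your-resource.openai.azure.com",
    api_key="your-azure-api-key",
    deployment_name="your-deployment-name",
)
```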