Wurun

Async OpenAI API wrapper optimized for Jupyter notebooks

Features

HTTP/2 Connection Pooling

A shared HTTP/2 client keeps connections alive, so repeated API calls avoid per-request connection setup

Robust Retry Logic

Exponential backoff on rate limits and transient errors, with a configurable number of retry attempts
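
The idea behind exponential backoff can be sketched in a few lines of plain asyncio. This is an illustration of the pattern, not wurun's internals; the `with_retries` helper below is hypothetical.

```python
import asyncio
import random

async def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry an async call with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the last error
            # Delay doubles each attempt (1s, 2s, 4s, ...) plus random
            # jitter so concurrent retries don't all fire at once
            delay = base_delay * 2 ** attempt + random.uniform(0, 0.1)
            await asyncio.sleep(delay)
```

Jitter matters under rate limiting: without it, every throttled call retries at the same instant and hits the limit again.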

Batch Processing

Concurrent API calls with semaphore control and order preservation
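
Semaphore-limited concurrency with order preservation can be sketched in plain asyncio; this is a simplified illustration of the pattern, and the `gather_limited` helper is hypothetical, not part of wurun's API.

```python
import asyncio

async def gather_limited(coro_factories, concurrency=2):
    """Run coroutines with at most `concurrency` in flight at once;
    results come back in input order."""
    sem = asyncio.Semaphore(concurrency)

    async def run(factory):
        async with sem:
            return await factory()

    # asyncio.gather preserves the order of its arguments,
    # regardless of which task finishes first
    return await asyncio.gather(*(run(f) for f in coro_factories))
```

The semaphore caps how many requests hit the API at once, while `gather` guarantees the i-th result matches the i-th input.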

Jupyter Optimized

Clean notebook output and error handling for interactive development

DataFrame Support

Process pandas DataFrames with built-in message column handling

Zero Configuration

Simple setup that works with OpenAI and Azure OpenAI out of the box

Installation

Production Use

pip install wurun

With DataFrame Support

pip install wurun[dataframe]

Development

pip install wurun[dev]

Quick Start

Basic Usage

from wurun import Wurun

# Setup once per kernel
await Wurun.setup(
    endpoint="https://api.openai.com/v1",
    api_key="your-api-key",
    deployment_name="gpt-3.5-turbo"
)

# Single question
messages = [{"role": "user", "content": "Explain asyncio"}]
answer = await Wurun.ask(messages)
print(answer)

# Cleanup
await Wurun.close()

Batch Processing

# Batch processing
questions = [
    [{"role": "user", "content": "What is Python?"}],
    [{"role": "user", "content": "What is JavaScript?"}]
]
answers = await Wurun.run_gather(questions, concurrency=2)

# DataFrame processing
import pandas as pd
df = pd.DataFrame({'messages': questions})
answers = await Wurun.run_dataframe(df, 'messages')
df['answers'] = answers
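
If your DataFrame starts from raw text rather than pre-built messages, constructing the messages column is a one-liner in plain pandas (independent of wurun; column names here are just examples):

```python
import pandas as pd

df = pd.DataFrame({"question": ["What is Python?", "What is JavaScript?"]})
# Wrap each question in the chat-message format the API expects
df["messages"] = df["question"].apply(
    lambda q: [{"role": "user", "content": q}]
)
```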

API Reference

Setup & Teardown

  • Wurun.setup() - Initialize client
  • Wurun.close() - Clean up resources

Single Calls

  • Wurun.ask() - Single API call with retry
  • return_meta=True - Include latency and retry count

Batch Processing

  • Wurun.run_gather() - Preserve input order
  • Wurun.run_as_completed() - Process as results finish
  • Wurun.run_dataframe() - Process DataFrame columns
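
The difference between the two batch modes mirrors `asyncio.gather` versus `asyncio.as_completed`: the former returns results in input order, the latter yields each result the moment it finishes. A minimal sketch of the as-completed behavior, assuming nothing about wurun's internals:

```python
import asyncio

async def slow_echo(value, delay):
    await asyncio.sleep(delay)
    return value

async def demo():
    tasks = [asyncio.create_task(slow_echo(v, d))
             for v, d in [("a", 0.03), ("b", 0.01)]]
    # as_completed yields in completion order: "b" finishes first
    return [await fut for fut in asyncio.as_completed(tasks)]
```

In a notebook, the as-completed style lets you start reading early answers while slower calls are still in flight.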

Notebook Helpers

  • Wurun.print_qna_ordered() - Pretty print Q&A
  • Wurun.print_as_ready() - Print as completed