
# How to Access GPT-5 and GPT-5.2 via API - Complete Developer Guide
OpenAI has released its most powerful models yet: GPT-5, GPT-5.2, and the reasoning-focused o3-pro. This guide shows you how to access these cutting-edge models through Crazyrouter's unified API.
## Supported OpenAI Models
Crazyrouter provides access to the complete OpenAI model lineup:
| Model | Input ($/1M tokens) | Output ($/1M tokens) | Best For |
|---|---|---|---|
| gpt-5.2 | $1.75 | $14.00 | Latest flagship, complex tasks |
| gpt-5.2-pro | $3.50 | $28.00 | Enhanced reasoning |
| gpt-5 | $1.25 | $10.00 | General tasks |
| gpt-5-pro | $2.50 | $20.00 | Advanced analysis |
| gpt-5-mini | $0.25 | $2.00 | Cost-effective |
| gpt-5-nano | $0.05 | $0.40 | High-volume tasks |
| o3-pro | $20.00 | $80.00 | Complex reasoning |
| o3-mini | $1.10 | $4.40 | Efficient reasoning |
| o4-mini | $1.10 | $4.40 | Latest reasoning model |
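To compare models on expected spend, the per-token prices above translate directly into a per-request estimate. A minimal sketch, assuming the rates in the table; the `PRICES` dict and `estimate_cost` helper are illustrative, not part of any SDK:

```python
# Prices from the table above, in $ per 1M tokens: (input, output).
PRICES = {
    "gpt-5.2": (1.75, 14.00),
    "gpt-5": (1.25, 10.00),
    "gpt-5-mini": (0.25, 2.00),
    "gpt-5-nano": (0.05, 0.40),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the dollar cost of one request from token counts."""
    inp, out = PRICES[model]
    return input_tokens / 1_000_000 * inp + output_tokens / 1_000_000 * out

# e.g. 2,000 input + 500 output tokens on gpt-5.2:
# 2000/1e6 * 1.75 + 500/1e6 * 14.00 = 0.0035 + 0.0070 = $0.0105
print(f"${estimate_cost('gpt-5.2', 2000, 500):.4f}")
```

The same request on `gpt-5-nano` costs roughly 35x less, which is why routing simple tasks to the smaller models matters at volume.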
## Quick Start

### 1. Get Your API Key

- Visit Crazyrouter Console
- Navigate to "Token Management"
- Click "Create Token"
- Copy your API key (starts with `sk-`)
### 2. Make Your First Request

#### Using Python (Recommended)
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://crazyrouter.com/v1",
    default_headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
    max_tokens=1000
)

print(response.choices[0].message.content)
```
#### Using Node.js
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'sk-your-api-key',
  baseURL: 'https://crazyrouter.com/v1',
  defaultHeaders: {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
  }
});

async function main() {
  const response = await client.chat.completions.create({
    model: 'gpt-5.2',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' }
    ],
    temperature: 0.7
  });
  console.log(response.choices[0].message.content);
}

main();
```
#### Using curl
```bash
curl https://crazyrouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "Hello, GPT-5.2!"}],
    "temperature": 0.7
  }'
```
## Streaming Responses
For real-time output, enable streaming:
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://crazyrouter.com/v1",
    default_headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }
)

stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Write a short story about AI."}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
## Using Reasoning Models (o3-pro)

The `o3-pro` model excels at complex reasoning tasks:
```python
response = client.chat.completions.create(
    model="o3-pro",
    messages=[
        {"role": "user", "content": "Solve this step by step: If a train travels 120 miles in 2 hours, then stops for 30 minutes, then travels another 90 miles in 1.5 hours, what is the average speed for the entire journey including the stop?"}
    ]
)

print(response.choices[0].message.content)
```
## GPT-5 Codex Models
For code generation tasks, use the specialized codex models:
```python
response = client.chat.completions.create(
    model="gpt-5-codex",
    messages=[
        {"role": "user", "content": "Write a Python function to implement binary search"}
    ]
)
```

Available codex variants: `gpt-5-codex`, `gpt-5-codex-high`, `gpt-5-codex-medium`, `gpt-5-codex-low`, `gpt-5.2-codex`
## Best Practices
- Choose the right model: use `gpt-5-nano` for simple, high-volume tasks and `gpt-5.2` for complex ones
- Set an appropriate temperature: lower (0.1-0.3) for factual tasks, higher (0.7-1.0) for creative tasks
- Use streaming for a better user experience in chat applications
- Handle errors gracefully: implement retry logic for rate limits
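The retry advice above can be sketched with exponential backoff. This is a minimal, stdlib-only sketch; the `with_retries` helper is illustrative, not part of any SDK, and in production you would catch the SDK's `openai.RateLimitError` specifically rather than a broad exception class:

```python
import random
import time

def with_retries(fn, retryable=(Exception,), max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on retryable exceptions with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Back off base_delay, 2x, 4x, ... with jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))

# Usage with the client from the Quick Start, e.g.:
# response = with_retries(
#     lambda: client.chat.completions.create(
#         model="gpt-5.2",
#         messages=[{"role": "user", "content": "Hello!"}],
#     ),
#     retryable=(openai.RateLimitError,),
# )
```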
## Next Steps
- View Model Pricing for detailed costs
- Read the API Documentation for advanced features
- Join our Discord Community for support
For questions, contact support@crazyrouter.com


