DeepSeek R2 API Guide: How to Use the Next-Gen Reasoning Model

Crazyrouter Team
February 22, 2026
English · Tutorial

DeepSeek R2 is the latest reasoning model from DeepSeek, building on the success of DeepSeek R1 that took the AI world by storm. With enhanced chain-of-thought reasoning, improved math and coding capabilities, and competitive pricing, R2 has become a serious contender in the reasoning model space. This guide covers everything you need to know to start using DeepSeek R2 through the API.

What is DeepSeek R2?#

DeepSeek R2 is an advanced reasoning model developed by the Chinese AI lab DeepSeek. It uses extended chain-of-thought (CoT) reasoning to solve complex problems by "thinking through" each step before providing an answer.

Key characteristics of DeepSeek R2:

  • Extended reasoning: Generates detailed thinking chains before answering
  • Math excellence: Achieves state-of-the-art results on mathematical benchmarks
  • Code generation: Strong performance on coding tasks including competitive programming
  • Open weights: Available as open-source, allowing self-hosting
  • Cost-effective: Significantly cheaper than comparable models from OpenAI

DeepSeek R2 vs Other Reasoning Models#

| Feature | DeepSeek R2 | OpenAI o3 | Claude Opus 4.5 | Gemini 3 Pro |
| --- | --- | --- | --- | --- |
| Reasoning approach | Chain-of-thought | Internal reasoning | Extended thinking | Multi-step reasoning |
| Math (AIME 2025) | 92.4% | 96.7% | 88.1% | 90.3% |
| Coding (SWE-bench) | 61.2% | 69.1% | 64.0% | 58.7% |
| Context window | 128K | 200K | 200K | 2M |
| Open source | Yes | No | No | No |
| Input price | $0.55/M | $10/M | $15/M | $1.25/M |
| Output price | $2.19/M | $40/M | $75/M | $10/M |

DeepSeek R2 offers roughly 80-95% of o3's performance at a fraction of the cost, making it an excellent choice for developers who need reasoning capabilities without breaking the bank.

How to Use DeepSeek R2 API#

Option 1: Direct DeepSeek API#

bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-deepseek-key" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [
      {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ]
  }'

Option 2: Crazyrouter API#

Using Crazyrouter gives you access to DeepSeek R2 alongside 300+ other models through a single API key:

Python Example#

python
from openai import OpenAI

client = OpenAI(
    api_key="your-crazyrouter-key",
    base_url="https://api.crazyrouter.com/v1"
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        {
            "role": "user",
            "content": "A farmer has 100 meters of fencing. What dimensions of a rectangular pen maximize the enclosed area? Show your reasoning step by step."
        }
    ],
    stream=True
)

for chunk in response:
    delta = chunk.choices[0].delta
    # deepseek-reasoner streams the thinking as reasoning_content,
    # then the final answer as content
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="")
    elif delta.content:
        print(delta.content, end="")

Node.js Example#

javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-crazyrouter-key',
  baseURL: 'https://api.crazyrouter.com/v1'
});

async function reasonWithDeepSeek(problem) {
  const response = await client.chat.completions.create({
    model: 'deepseek-reasoner',
    messages: [
      { role: 'user', content: problem }
    ]
  });

  // The response includes reasoning_content (thinking) and content (answer)
  const message = response.choices[0].message;
  console.log('Reasoning:', message.reasoning_content);
  console.log('Answer:', message.content);
  return message;
}

reasonWithDeepSeek(
  'Write a function to find the longest palindromic substring in O(n) time.'
);

Handling Reasoning Tokens#

DeepSeek R2 returns both reasoning tokens (the thinking process) and answer tokens. Here's how to handle them:

python
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Solve: x^3 - 6x^2 + 11x - 6 = 0"}]
)

message = response.choices[0].message

# Access the reasoning chain
if hasattr(message, 'reasoning_content') and message.reasoning_content:
    print("=== Thinking Process ===")
    print(message.reasoning_content)

# Access the final answer
print("\n=== Answer ===")
print(message.content)

# Token usage breakdown
usage = response.usage
print(f"\nInput tokens: {usage.prompt_tokens}")
print(f"Reasoning tokens: {usage.completion_tokens_details.reasoning_tokens}")
print(f"Output tokens: {usage.completion_tokens}")

DeepSeek R2 Pricing#

| Provider | Input Price | Output Price | Reasoning Tokens |
| --- | --- | --- | --- |
| DeepSeek Direct | $0.55/M tokens | $2.19/M tokens | Included in output |
| Crazyrouter | $0.55/M tokens | $2.19/M tokens | Included in output |
| OpenAI o3 (comparison) | $10/M tokens | $40/M tokens | Included in output |
| Claude Opus 4.5 (comparison) | $15/M tokens | $75/M tokens | Separate pricing |

DeepSeek R2 is approximately 18x cheaper than o3 on both input and output tokens. For a typical reasoning task consuming 10K input + 50K output tokens, the cost comparison is:

  • DeepSeek R2: ~$0.12
  • OpenAI o3: ~$2.10
  • Claude Opus 4.5: ~$3.90
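
You can verify these figures with a few lines of arithmetic. This sketch hard-codes the per-million-token rates from the pricing table above (reasoning tokens bill as output tokens, so they count toward the output total):

```python
# (input $/M tokens, output $/M tokens), taken from the pricing table above
PRICES = {
    "deepseek-r2": (0.55, 2.19),
    "openai-o3": (10.00, 40.00),
    "claude-opus-4.5": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD. Reasoning tokens are billed as output."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# The 10K input + 50K output scenario from the bullets above:
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 10_000, 50_000):.3f}")
```

Update the `PRICES` table if the providers change their rates.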

Best Use Cases for DeepSeek R2#

Mathematical Problem Solving#

R2 excels at multi-step mathematical reasoning, from algebra to calculus to number theory.

Code Generation and Debugging#

The model can write complex algorithms, debug existing code, and explain its reasoning process.

Scientific Analysis#

R2 handles scientific reasoning tasks well, including physics problems, chemistry equations, and data analysis.

Logical Puzzles and Planning#

Tasks requiring multi-step logical deduction benefit from R2's chain-of-thought approach.

Tips for Getting the Best Results#

  1. Be specific about reasoning requirements: Ask the model to "show step-by-step reasoning" or "explain your thought process"
  2. Use system prompts wisely: Set expectations for output format and detail level
  3. Leverage streaming: Reasoning responses can be long; use streaming to show progress
  4. Set appropriate max_tokens: Reasoning chains can be verbose; allocate enough tokens (4096+ recommended)
  5. Compare with other models: For critical tasks, run the same prompt through multiple reasoning models via Crazyrouter
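
Tips 1 through 4 can be combined into a single request template. A minimal sketch — the system prompt wording and the `max_tokens` value of 8192 are illustrative choices, not requirements:

```python
def build_reasoning_request(problem: str) -> dict:
    """Assemble chat-completion kwargs that apply tips 1-4 above."""
    return {
        "model": "deepseek-reasoner",
        "messages": [
            # Tip 2: a system prompt that sets output format and detail level
            {"role": "system",
             "content": "You are a careful problem solver. Show step-by-step "
                        "reasoning, then state a final answer."},
            # Tip 1: explicitly ask for the reasoning
            {"role": "user",
             "content": f"{problem}\n\nShow your reasoning step by step."},
        ],
        "stream": True,       # Tip 3: stream long reasoning chains as they arrive
        "max_tokens": 8192,   # Tip 4: leave ample room for verbose thinking
    }

request = build_reasoning_request("Integrate x * e^x with respect to x.")
# Pass to any OpenAI-compatible client, e.g.:
# client.chat.completions.create(**request)
```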

Frequently Asked Questions#

Is DeepSeek R2 open source?#

Yes, DeepSeek R2 is released with open weights under a permissive license. You can self-host it, though it requires significant GPU resources (8x A100 or equivalent for the full model).

How does DeepSeek R2 compare to R1?#

R2 improves on R1 with better reasoning accuracy (especially in math), longer context support (128K vs 64K), faster inference speed, and improved instruction following.

Can I use DeepSeek R2 for production applications?#

Absolutely. The model is production-ready and available through both DeepSeek's official API and aggregators like Crazyrouter. The open-source license also allows commercial use.

Does DeepSeek R2 support function calling?#

Yes, R2 supports function calling and tool use, making it suitable for agent-based applications that require reasoning about when and how to use tools.
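
As a sketch of what that looks like in practice, the snippet below defines an OpenAI-style tool schema (the same request shape the examples above use) and a local dispatcher. The `get_weather` tool is a hypothetical example for illustration, not a real API:

```python
import json

# OpenAI-style tool definition; get_weather is a made-up example tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather forecast for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Route a tool call returned by the model to a local function (stubbed here)."""
    if name == "get_weather":
        city = json.loads(arguments)["city"]
        return json.dumps({"city": city, "forecast": "rain", "high_c": 12})
    raise ValueError(f"Unknown tool: {name}")

# In a real agent loop you would pass tools=tools to
# client.chat.completions.create(model="deepseek-reasoner", ...),
# run dispatch_tool_call() on each tool_call in the response, and append
# the result as a {"role": "tool"} message before calling the model again.
print(dispatch_tool_call("get_weather", '{"city": "Beijing"}'))
```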

What languages does DeepSeek R2 support?#

R2 performs best in English and Chinese, with reasonable performance in other major languages. For multilingual applications, consider pairing it with models that have stronger multilingual capabilities.

Summary#

DeepSeek R2 delivers impressive reasoning capabilities at a fraction of the cost of competing models. Whether you're building math tutoring apps, code assistants, or complex analytical tools, R2 offers an excellent balance of performance and affordability.

Get started with DeepSeek R2 and 300+ other AI models through Crazyrouter — one API key, unified access, and competitive pricing. Sign up today and start building.
