Function Calling Across AI Providers: A Unified Implementation Guide

Crazyrouter Team
February 20, 2026

Function calling (also called "tool use") is what turns a chatbot into an agent. Instead of just generating text, the AI can call functions in your code — search databases, send emails, check weather, execute trades, or interact with any API. It's the bridge between language understanding and real-world action.

Every major AI provider supports function calling, but each implements it slightly differently. This guide shows you how to implement it once and make it work everywhere.

How Function Calling Works#

The flow is the same across all providers:

code
1. You define available functions (tools) with JSON schemas
2. You send a message + tool definitions to the AI
3. The AI decides whether to call a function
4. If yes, it returns the function name + arguments (as JSON)
5. You execute the function locally
6. You send the result back to the AI
7. The AI generates a final response using the function result

The AI never executes code directly — it tells you what to call, and you execute it in your environment.
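
Concretely, the assistant message in step 4 has roughly this shape (OpenAI chat completions format; the `id` and argument values below are made up for illustration):

```python
import json

# Illustrative assistant message requesting a tool call
# (OpenAI chat completions format; id and values are made up)
assistant_message = {
    "role": "assistant",
    "content": None,  # no text yet -- the model wants a function result first
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_weather",
                # Arguments arrive as a JSON *string*, not a parsed object
                "arguments": '{"location": "Tokyo", "unit": "celsius"}',
            },
        }
    ],
}

args = json.loads(assistant_message["tool_calls"][0]["function"]["arguments"])
print(args["location"])  # -> Tokyo
```

Note that `arguments` is a JSON-encoded string, which is why every implementation below calls `json.loads` before executing the function.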

Defining Tools#

Tools are defined as JSON schemas that describe the function name, description, and parameters:

python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., 'San Francisco, CA'"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search the product catalog",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query"
                    },
                    "category": {
                        "type": "string",
                        "enum": ["electronics", "clothing", "books", "all"]
                    },
                    "max_price": {
                        "type": "number",
                        "description": "Maximum price filter"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

Good tool descriptions are critical. The AI uses them to decide when and how to call each function.

Implementation: Python (OpenAI SDK)#

This works with OpenAI, Claude, Gemini, and any provider through Crazyrouter:

python
import json
from openai import OpenAI

client = OpenAI(
    api_key="your-crazyrouter-api-key",
    base_url="https://crazyrouter.com/v1"
)

# Define your actual function implementations
def get_weather(location: str, unit: str = "celsius") -> dict:
    """Your actual weather API call."""
    # Replace with real API call
    return {
        "location": location,
        "temperature": 22,
        "unit": unit,
        "condition": "Partly cloudy"
    }

def search_products(query: str, category: str = "all", max_price: float | None = None) -> dict:
    """Your actual product search."""
    # Replace with real database query
    return {
        "results": [
            {"name": "Wireless Headphones", "price": 79.99, "rating": 4.5},
            {"name": "Bluetooth Speaker", "price": 49.99, "rating": 4.2}
        ],
        "total": 2
    }

# Map function names to implementations
AVAILABLE_FUNCTIONS = {
    "get_weather": get_weather,
    "search_products": search_products,
}

def chat_with_tools(user_message: str, model: str = "gpt-4.1"):
    messages = [{"role": "user", "content": user_message}]
    
    # Step 1: Send message with tools
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        tool_choice="auto"  # Let the AI decide
    )
    
    assistant_message = response.choices[0].message
    
    # Step 2: Check if the AI wants to call functions
    if assistant_message.tool_calls:
        messages.append(assistant_message)
        
        # Step 3: Execute each function call
        for tool_call in assistant_message.tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)
            
            print(f"Calling {function_name}({arguments})")
            
            # Execute the function
            func = AVAILABLE_FUNCTIONS.get(function_name)
            if func:
                result = func(**arguments)
            else:
                result = {"error": f"Unknown function: {function_name}"}
            
            # Step 4: Send the result back
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })
        
        # Step 5: Get the final response
        final_response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools
        )
        
        return final_response.choices[0].message.content
    
    # No function call needed — return direct response
    return assistant_message.content

# Usage
print(chat_with_tools("What's the weather in Tokyo?"))
# Output: "The weather in Tokyo is currently 22°C and partly cloudy."

print(chat_with_tools("Find me headphones under $100"))
# Output: "I found 2 options: Wireless Headphones ($79.99, 4.5★) and..."

Implementation: Node.js#

javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-crazyrouter-api-key',
  baseURL: 'https://crazyrouter.com/v1'
});

// `tools` is the same JSON-schema array shown in the "Defining Tools" section
// (the format is identical in JavaScript)

// Function implementations
const availableFunctions = {
  get_weather: async ({ location, unit = 'celsius' }) => ({
    location,
    temperature: 22,
    unit,
    condition: 'Partly cloudy'
  }),
  
  search_products: async ({ query, category = 'all', max_price }) => ({
    results: [
      { name: 'Wireless Headphones', price: 79.99, rating: 4.5 },
      { name: 'Bluetooth Speaker', price: 49.99, rating: 4.2 }
    ],
    total: 2
  })
};

async function chatWithTools(userMessage, model = 'gpt-4.1') {
  const messages = [{ role: 'user', content: userMessage }];
  
  // First call with tools
  let response = await client.chat.completions.create({
    model,
    messages,
    tools,
    tool_choice: 'auto'
  });
  
  let assistantMessage = response.choices[0].message;
  
  // Process tool calls if any
  while (assistantMessage.tool_calls?.length) {
    messages.push(assistantMessage);
    
    for (const toolCall of assistantMessage.tool_calls) {
      const fn = availableFunctions[toolCall.function.name];
      const args = JSON.parse(toolCall.function.arguments);
      
      console.log(`Calling ${toolCall.function.name}(${JSON.stringify(args)})`);
      
      const result = fn ? await fn(args) : { error: 'Unknown function' };
      
      messages.push({
        role: 'tool',
        tool_call_id: toolCall.id,
        content: JSON.stringify(result)
      });
    }
    
    // Get next response
    response = await client.chat.completions.create({
      model,
      messages,
      tools
    });
    
    assistantMessage = response.choices[0].message;
  }
  
  return assistantMessage.content;
}

// Usage
console.log(await chatWithTools("What's the weather in London?"));

Multi-Turn Tool Use (Agent Loop)#

For complex tasks that require multiple function calls:

python
def agent_loop(user_message: str, model: str = "gpt-4.1", max_iterations: int = 10):
    """Run an agent loop that can make multiple tool calls."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant with access to tools. Use them when needed to answer questions accurately."},
        {"role": "user", "content": user_message}
    ]
    
    for i in range(max_iterations):
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice="auto"
        )
        
        assistant_message = response.choices[0].message
        messages.append(assistant_message)
        
        # If no tool calls, we're done
        if not assistant_message.tool_calls:
            return assistant_message.content
        
        # Execute all tool calls
        for tool_call in assistant_message.tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)
            
            func = AVAILABLE_FUNCTIONS.get(function_name)
            result = func(**arguments) if func else {"error": "Unknown function"}
            
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })
    
    return "Max iterations reached. Last response: " + messages[-1].get("content", "")

# This handles multi-step queries like:
# "Compare the weather in Tokyo and New York, then find me 
#  outdoor gear suitable for the warmer city"

Parallel Function Calling#

Some models can request multiple function calls simultaneously:

python
# The AI might return multiple tool_calls in a single response:
# tool_calls = [
#   {name: "get_weather", args: {location: "Tokyo"}},
#   {name: "get_weather", args: {location: "New York"}},
# ]

# Execute them in parallel for speed:
import asyncio

async def execute_tools_parallel(tool_calls):
    async def execute_one(tool_call):
        func = AVAILABLE_FUNCTIONS.get(tool_call.function.name)
        args = json.loads(tool_call.function.arguments)
        if func:
            # Run the sync function in a worker thread so the calls actually overlap;
            # calling it directly inside a coroutine would still run sequentially
            result = await asyncio.to_thread(func, **args)
        else:
            result = {"error": f"Unknown function: {tool_call.function.name}"}
        return {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result)
        }
    
    return await asyncio.gather(*[execute_one(tc) for tc in tool_calls])

Provider Differences#

While the OpenAI format has become the de facto standard, there are subtle differences:

| Feature | OpenAI | Claude | Gemini |
|---|---|---|---|
| Format | `tools` + `tool_calls` | `tools` + `tool_use` | `tools` + `function_call` |
| Parallel calls | ✅ | ✅ | ✅ |
| Streaming + tools | ✅ | ✅ | ✅ |
| Forced tool use | `tool_choice: {name}` | `tool_choice: {name}` | `tool_config` |
| Max tools | 128 | 64 | 64 |

When using Crazyrouter, these differences are normalized. You write OpenAI-format tool definitions once, and they work across all providers:

python
# Same code, different models — Crazyrouter handles the translation
for model in ["gpt-4.1", "claude-sonnet-4-5", "gemini-2.5-flash"]:
    result = chat_with_tools("What's the weather in Paris?", model=model)
    print(f"{model}: {result}")

Best Practices#

1. Write Clear Tool Descriptions#

The AI uses descriptions to decide when to call functions. Be specific:

python
# ❌ Vague
{"name": "search", "description": "Search for things"}

# ✅ Specific
{"name": "search_products", "description": "Search the e-commerce product catalog by keyword. Returns product name, price, and rating. Use when the user asks about products, shopping, or prices."}

2. Validate Function Arguments#

Never trust AI-generated arguments blindly:

python
def safe_execute(function_name: str, arguments: dict):
    """Execute a function with validation."""
    func = AVAILABLE_FUNCTIONS.get(function_name)
    if not func:
        return {"error": f"Unknown function: {function_name}"}
    
    try:
        # Type checking and sanitization
        if function_name == "search_products":
            arguments["query"] = str(arguments.get("query", ""))[:200]
            if "max_price" in arguments:
                arguments["max_price"] = float(arguments["max_price"])
        
        return func(**arguments)
    except Exception as e:
        return {"error": f"Function execution failed: {str(e)}"}

3. Limit Tool Scope#

Only expose functions the AI actually needs:

python
# Context-dependent tool selection
def get_tools_for_context(user_role: str):
    base_tools = [weather_tool, search_tool]
    
    if user_role == "admin":
        base_tools.extend([delete_tool, update_tool])
    
    if user_role == "support":
        base_tools.extend([ticket_tool, refund_tool])
    
    return base_tools

4. Handle Errors Gracefully#

python
def execute_with_error_handling(tool_call):
    try:
        func = AVAILABLE_FUNCTIONS[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        result = func(**args)
        return json.dumps(result)
    except json.JSONDecodeError:
        return json.dumps({"error": "Invalid arguments format"})
    except KeyError:
        return json.dumps({"error": f"Unknown function: {tool_call.function.name}"})
    except Exception as e:
        return json.dumps({"error": f"Execution failed: {str(e)}"})

FAQ#

Which AI model is best for function calling?#

GPT-4.1 and Claude Sonnet 4.5 are both excellent at function calling. GPT-4.1 tends to be more precise with argument formatting, while Claude handles complex multi-step tool use well. Test with your specific tools to see which performs better.

Can I use function calling with streaming?#

Yes. Tool call arguments are streamed as deltas. You need to concatenate the argument chunks before parsing the JSON. The OpenAI SDK handles this automatically.
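
If you process the raw stream yourself, the accumulation looks roughly like this (a sketch against the OpenAI chunk shape; the sample chunks here are simulated, not real API output):

```python
from types import SimpleNamespace as NS

def accumulate_tool_calls(chunks):
    """Merge streamed tool-call deltas into complete calls, keyed by index."""
    calls = {}
    for chunk in chunks:
        delta = chunk.choices[0].delta
        for tc in (delta.tool_calls or []):
            entry = calls.setdefault(tc.index, {"id": None, "name": "", "arguments": ""})
            if tc.id:
                entry["id"] = tc.id
            if tc.function.name:
                entry["name"] = tc.function.name
            if tc.function.arguments:
                entry["arguments"] += tc.function.arguments  # concatenate JSON fragments
    return [calls[i] for i in sorted(calls)]

# Simulated stream: one call's arguments arrive split across two chunks
def chunk(tool_calls):
    return NS(choices=[NS(delta=NS(tool_calls=tool_calls))])

stream = [
    chunk([NS(index=0, id="call_1", function=NS(name="get_weather", arguments='{"loca'))]),
    chunk([NS(index=0, id=None, function=NS(name=None, arguments='tion": "Tokyo"}'))]),
    chunk(None),  # chunks with no tool-call deltas are skipped
]

calls = accumulate_tool_calls(stream)
print(calls[0]["arguments"])  # -> {"location": "Tokyo"}
```

Only after the stream finishes is the concatenated `arguments` string valid JSON, so parse it once at the end.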

How many tools can I define?#

OpenAI supports up to 128 tools, Claude and Gemini support 64. In practice, performance degrades with more than 20-30 tools. If you need more, use a two-stage approach: first call to select the relevant tool category, second call with the specific tools.
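
The two-stage idea can be sketched like this (the category names and helper below are hypothetical, not a Crazyrouter feature):

```python
# Hypothetical registry grouping tools by category; in real code each entry
# would be a full JSON-schema tool definition, not just a name
TOOL_CATEGORIES = {
    "weather": ["get_weather", "get_forecast"],
    "commerce": ["search_products", "check_inventory"],
    "support": ["create_ticket", "issue_refund"],
}

def select_tools(category: str) -> list[str]:
    """Stage 2: expose only the chosen category's tools to the model."""
    return TOOL_CATEGORIES.get(category, [])

# Stage 1 would be a cheap model call that sees only the category names and
# picks one, e.g. "commerce" for "find me headphones under $100"
print(select_tools("commerce"))  # -> ['search_products', 'check_inventory']
```

This keeps each request well under the point where tool selection accuracy starts to degrade.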

Does function calling work through Crazyrouter?#

Yes. Crazyrouter supports function calling for all major providers. You define tools in the OpenAI format, and Crazyrouter translates them to the provider's native format automatically.

Is function calling safe?#

The AI suggests function calls but never executes them directly. You control execution in your code. Always validate arguments, limit tool scope, and implement proper error handling.

Summary#

Function calling transforms AI from a text generator into an action-taking agent. The pattern is consistent across providers: define tools, let the AI decide when to use them, execute locally, and feed results back.

For the smoothest multi-provider function calling experience, Crazyrouter normalizes tool use across OpenAI, Claude, Gemini, and 300+ models. One API format, one key, all providers. Start building AI agents at crazyrouter.com.
