
Tool Calling in LLM

Key idea:

Tool Calling (aka Function Calling) — the way an LLM invokes external functions via structured output (usually JSON). Client supplies a tool schema → LLM decides which to call with what args → client executes → result returns to LLM → LLM synthesises final answer. Standard in OpenAI, Anthropic, Gemini APIs. Needed for agents, on-demand RAG, database queries, external integrations.

Below: details, example, FAQ.


Details

  • Tool schema: name, description, JSON Schema for args
  • LLM returns structured output { "tool": "name", "args": {...} }
  • Client executes tool → returns result to LLM
  • Parallel tool calls: LLM can invoke multiple tools simultaneously (OpenAI 2023+)
  • Model Context Protocol (MCP) — Anthropic 2024 spec for tool standardisation
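The round trip in the list above can be sketched as plain Python. The `fake` LLM output and the `get_weather` tool below are illustrative stand-ins, not a real API:

```python
import json

# Hypothetical local tool (step: client executes)
def get_weather(location: str) -> str:
    return f"Sunny, 20°C in {location}"

# Tool registry: name → callable
TOOLS = {"get_weather": get_weather}

# Stand-in for the LLM's structured output (step: LLM decides)
llm_output = '{"tool": "get_weather", "args": {"location": "Moscow"}}'

# Client side: parse the structured output, dispatch, collect the result
call = json.loads(llm_output)
result = TOOLS[call["tool"]](**call["args"])
# `result` would then be sent back to the LLM to synthesise the final answer
print(result)
```

The key point is that the model never executes anything itself; it only emits the JSON, and the client owns the dispatch.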

Example

# OpenAI tool calling (Python SDK v1+)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
  'type': 'function',
  'function': {
    'name': 'get_weather',
    'description': 'Get current weather',
    'parameters': {
      'type': 'object',
      'properties': {'location': {'type': 'string'}},
      'required': ['location']
    }
  }
}]

response = client.chat.completions.create(
  model='gpt-5',
  messages=[{'role': 'user', 'content': 'Weather in Moscow?'}],
  tools=tools
)
# response.choices[0].message.tool_calls → list of tool calls;
# each has .function.name and .function.arguments (a JSON string)
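After the request above, the client executes each requested tool and returns the result to the model as a `tool` role message. A minimal sketch of that second leg, using plain dicts shaped like the OpenAI chat format (`get_weather` and the `call_123` id are illustrative stubs):

```python
import json

def get_weather(location: str) -> str:
    return f"Sunny in {location}"  # stub tool implementation

# Shape of one entry in response.choices[0].message.tool_calls
tool_call = {
    "id": "call_123",
    "function": {
        "name": "get_weather",
        "arguments": '{"location": "Moscow"}',  # arguments arrive as a JSON string
    },
}

# Parse the arguments and execute the tool locally
args = json.loads(tool_call["function"]["arguments"])
result = get_weather(**args)

# Sent back to the model on the next request so it can write the final answer
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": result,
}
```

Appending `tool_message` to `messages` and calling the API again closes the loop described under Key idea.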


Frequently Asked Questions

Is tool calling reliable?

2026 models (GPT-5, Claude Opus 4.7) reach 95%+ accuracy on simple tools. Complex tools (deeply nested schemas) are less predictable — test them against representative inputs before relying on them.

What is MCP?

Model Context Protocol from Anthropic (2024) — an open standard for exposing tools to LLM applications. Clients (Claude Desktop, Zed IDE) connect to MCP servers that expose the file system, GitHub, Slack, etc.

Is tool calling secure?

The LLM may call the wrong tool or pass bad arguments. Enforce authorisation on the server side, never trust LLM output, and sandbox tool execution.
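A server-side sketch of those checks — allowlist the tool name and validate arguments before executing anything (the `ALLOWED_TOOLS` set and the size limit are illustrative choices, not part of any API):

```python
ALLOWED_TOOLS = {"get_weather"}  # server-side allowlist
MAX_ARG_LEN = 256                # basic input size limit

def safe_dispatch(name: str, args: dict) -> str:
    # Never trust the model: verify the tool name server-side
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowed: {name}")
    # Validate argument types and sizes before execution
    loc = args.get("location")
    if not isinstance(loc, str) or len(loc) > MAX_ARG_LEN:
        raise ValueError("invalid 'location' argument")
    return f"weather for {loc}"  # stub for the actual tool call
```

In production the same idea extends to per-user authorisation checks and running the tool in a sandboxed process.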