Tool Calling (aka Function Calling) — the way an LLM invokes external functions via structured output (usually JSON). Client supplies a tool schema → LLM decides which to call with what args → client executes → result returns to LLM → LLM synthesises final answer. Standard in OpenAI, Anthropic, Gemini APIs. Needed for agents, on-demand RAG, database queries, external integrations.
Below: details, example, related terms, FAQ.
```python
# OpenAI tool calling
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Weather in Moscow?"}],
    tools=tools,
)
# response.choices[0].message.tool_calls → [{name, arguments}]
```

2026 models (GPT-5, Claude Opus 4.7) reach 95%+ accuracy on simple tools. Complex tools (nested schemas) — test them.
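The second half of the loop — executing the call and feeding the result back — can be sketched as follows. A minimal sketch: `get_weather` is a stub, and the tool call is shown as a plain dict rather than the SDK object the API actually returns.

```python
import json

def get_weather(location: str) -> str:
    # Stub: a real implementation would query a weather API.
    return json.dumps({"location": location, "temp_c": -5})

# Shape of one entry from response.choices[0].message.tool_calls,
# shown as a plain dict for illustration.
tool_call = {
    "id": "call_123",
    "function": {"name": "get_weather", "arguments": '{"location": "Moscow"}'},
}

# Arguments arrive as a JSON string — parse, execute, append the result.
args = json.loads(tool_call["function"]["arguments"])
result = get_weather(**args)

messages = [
    {"role": "user", "content": "Weather in Moscow?"},
    # ...the assistant message carrying tool_calls goes here...
    {"role": "tool", "tool_call_id": tool_call["id"], "content": result},
]
# A second client.chat.completions.create(...) call with these messages
# lets the model synthesise the final answer.
```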
Model Context Protocol from Anthropic (2024) — standard for exposing tools. Clients (Claude Desktop, Zed IDE) connect to MCP servers (file system, GitHub, Slack, etc).
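On the wire, MCP is JSON-RPC 2.0; a client invokes a server-exposed tool with a `tools/call` request roughly like this (field values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {"location": "Moscow"}
  }
}
```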
The LLM may call the wrong tool or pass bad arguments. Enforce authorisation on the server side, never trust LLM output, and sandbox tool execution.
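A minimal server-side check before executing anything the model asked for — a sketch with illustrative names, not a complete security layer:

```python
ALLOWED_TOOLS = {"get_weather"}

def validate_call(name: str, arguments: dict) -> None:
    # Whitelist tool names: reject anything the model invented.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"unknown tool: {name}")
    # Validate arguments against the schema before executing.
    if not isinstance(arguments.get("location"), str):
        raise ValueError("location must be a string")

validate_call("get_weather", {"location": "Moscow"})  # passes
```

In production, validate against the full JSON Schema (e.g. with a schema-validation library) and run the tool itself in a sandboxed process with least-privilege credentials.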