Tool Guardrails
Tool guardrails control which tools an AI system can invoke and how it may use them.
They are critical for agentic and autonomous AI systems, where model output translates directly into real actions.
Why tool guardrails matter
Without controls, agents can:
- Execute destructive commands
- Write files unintentionally
- Access unauthorized APIs
- Escalate privileges
- Leak secrets via tools
Common tool guardrails
- Tool Access Control: restricts which tools can be invoked (see the sketch after this list)
- IAM Permission Enforcement: enforces permission boundaries
- File Write Restrictions: prevents unsafe file system writes
- Destructive Action Detection: blocks irreversible operations
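The exact mechanism depends on the framework in use; as a rough illustration, a minimal allowlist-based tool access control check could look like the Python sketch below. The ToolCall class, ALLOWED_TOOLS set, and check_tool_access function are hypothetical names, not part of any particular library.

```python
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    """A tool invocation requested by the agent (hypothetical structure)."""
    name: str
    arguments: dict = field(default_factory=dict)


# Deny by default: only tools named here may be invoked.
ALLOWED_TOOLS = {"search_docs", "read_file", "send_email"}


def check_tool_access(call: ToolCall) -> bool:
    """Return True only if the tool is explicitly allowed."""
    return call.name in ALLOWED_TOOLS


print(check_tool_access(ToolCall("read_file", {"path": "README.md"})))   # True
print(check_tool_access(ToolCall("delete_database", {"name": "prod"})))  # False
```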
Enforcement model
Tool guardrails run:
- Before tool execution
- With full context (arguments, user, profile)
- In deterministic order
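One way to realize this model is a small pre-execution pipeline that evaluates guardrails in a fixed order against the full call context. The sketch below is illustrative only; GuardrailContext, PIPELINE, and enforce are assumed names rather than a specific product API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class GuardrailContext:
    """Full context available before execution (illustrative fields)."""
    tool_name: str
    arguments: dict
    user: str
    profile: str


# A guardrail inspects the context and returns a denial reason, or None to allow.
Guardrail = Callable[[GuardrailContext], Optional[str]]


def deny_unknown_tools(ctx: GuardrailContext) -> Optional[str]:
    allowed = {"search_docs", "read_file"}
    return None if ctx.tool_name in allowed else f"tool '{ctx.tool_name}' is not allowed"


def deny_admin_profile(ctx: GuardrailContext) -> Optional[str]:
    return "agents may not run with the admin profile" if ctx.profile == "admin" else None


# Deterministic order: guardrails are always evaluated in this exact sequence.
PIPELINE: List[Guardrail] = [deny_unknown_tools, deny_admin_profile]


def enforce(ctx: GuardrailContext) -> Optional[str]:
    """Run every guardrail before tool execution; return the first denial reason."""
    for guardrail in PIPELINE:
        reason = guardrail(ctx)
        if reason is not None:
            return reason  # denied: the tool never executes
    return None  # all guardrails passed; the tool may execute


ctx = GuardrailContext("read_file", {"path": "notes.txt"}, user="alice", profile="agent")
print(enforce(ctx))  # None -> allowed
```

Evaluating guardrails in a fixed sequence and stopping at the first denial keeps enforcement deterministic and makes the resulting audit trail easy to reason about.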
Example scenario
If an agent attempts to delete a database:
- The destructive action guardrail matches the request
- The tool call is denied before it executes
- An audit event is emitted
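A destructive action detection guardrail for this scenario might look roughly like the sketch below, which denies the call and records an audit event. The patterns, the guard_destructive_action function, and the logger name are assumptions for illustration, not an exhaustive or authoritative rule set.

```python
import logging
import re

# Audit logger used for every denied call (illustrative setup).
logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")
audit_log = logging.getLogger("tool_guardrails.audit")

# Assumed patterns for irreversible operations; a real list would be broader.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"\bDROP\s+(DATABASE|TABLE)\b", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\b"),
]


def guard_destructive_action(tool_name: str, arguments: dict) -> bool:
    """Return True if the call may proceed; deny and audit otherwise."""
    payload = " ".join(str(value) for value in arguments.values())
    for pattern in DESTRUCTIVE_PATTERNS:
        if pattern.search(payload):
            # The tool call is denied and an audit event is emitted.
            audit_log.info("denied tool=%s args=%r reason=destructive-action",
                           tool_name, arguments)
            return False
    return True


print(guard_destructive_action("run_sql", {"query": "DROP DATABASE customers"}))    # False
print(guard_destructive_action("run_sql", {"query": "SELECT count(*) FROM users"}))  # True
```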
Best practices
- Default to deny: block any tool that is not explicitly allowed
- Allow tools explicitly, per agent profile
- Use separate profiles for agents
- Log every tool invocation
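These practices can be combined into a small deny-by-default profile layer, sketched below under assumed names (AGENT_PROFILES, is_allowed, invoke_tool); a real deployment would back this with the platform's own profile and logging facilities.

```python
# Hypothetical per-agent profiles: deny by default, allow tools explicitly.
AGENT_PROFILES = {
    "research_agent": {"search_docs", "read_file"},
    "support_agent": {"read_file", "send_email"},
}


def is_allowed(profile: str, tool_name: str) -> bool:
    """Deny by default: unknown profiles and unlisted tools are rejected."""
    return tool_name in AGENT_PROFILES.get(profile, set())


def invoke_tool(profile: str, tool_name: str, arguments: dict) -> None:
    # Log every tool invocation, whether it is allowed or denied.
    decision = "allow" if is_allowed(profile, tool_name) else "deny"
    print(f"[tool-log] profile={profile} tool={tool_name} decision={decision}")
    if decision == "deny":
        raise PermissionError(f"{tool_name} is not allowed for profile {profile}")
    # ... the real tool would execute here ...


invoke_tool("research_agent", "search_docs", {"query": "guardrails"})
try:
    invoke_tool("research_agent", "delete_database", {"name": "prod"})
except PermissionError as exc:
    print(f"[blocked] {exc}")
```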
Next steps
- Learn how to write custom guardrails
- Learn about Profiles