Agentic AI
Safety for autonomous agents with tool execution.
Overview
The Agentic AI profile is a security rule set designed for autonomous agents. When an LLM can execute code, call APIs, or browse the web, its risk profile changes dramatically.
This profile acts as a sandbox, monitoring the intent and payload of every tool call. It prevents agents from executing destructive commands (such as rm -rf), exfiltrating data via curl, or browsing to malicious endpoints.
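A minimal sketch of this kind of payload check (the patterns and function name here are illustrative, not the profile's actual rule set):

import re

# Illustrative deny-list; the profile's real rules are more extensive.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",           # recursive forced deletion
    r"\bcurl\b.*\|\s*sh\b",    # piping remote content into a shell
    r"\bcurl\b.*(-d|--data)",  # potential data exfiltration via POST
]

def check_payload(command: str) -> None:
    """Raise if a shell payload matches a known-destructive pattern."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, command):
            raise PermissionError(f"Blocked destructive payload: {pattern}")

check_payload("ls -la")          # passes
# check_payload("rm -rf /data")  # raises PermissionError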
Included Guardrails (5 rules)
Tool Access Control Guardrail
Enforces fine-grained access control for tool invocation.
Destructive Tool Call Guardrail
Blocks high-risk or destructive tool invocations.
Command Injection Output Guardrail
Prevents generation of executable or shell-injection commands.
Sandboxed Output Guardrail
Restricts executable or actionable output to a safe sandbox.
File Write Restriction Guardrail
Restricts file system write access by tools or agents.
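Conceptually, the profile bundles these five rules into a single configuration. The sketch below is a hypothetical representation of that bundle; the actual keys and values are defined by the profile itself:

# Hypothetical view of the profile's rule bundle (names mirror the
# guardrails above; real configuration keys may differ).
AGENTIC_AI_PROFILE = {
    "tool_access_control":      {"mode": "allow_list"},
    "destructive_tool_call":    {"action": "block"},
    "command_injection_output": {"action": "block"},
    "sandboxed_output":         {"sandbox": True},
    "file_write_restriction":   {"allow_file_system": False},
}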
Key Benefits
Tool Sandboxing
Validates arguments of function calls to prevent injection attacks and misuse.
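Argument validation can be sketched as a per-tool schema check; the schema format and helper below are illustrative:

def validate_args(tool_name: str, args: dict, schemas: dict) -> None:
    """Reject calls to unregistered tools or with malformed arguments."""
    schema = schemas.get(tool_name)
    if schema is None:
        raise PermissionError(f"Tool not registered: {tool_name}")
    for key, value in args.items():
        expected_type = schema.get(key)
        if expected_type is None or not isinstance(value, expected_type):
            raise ValueError(f"Invalid argument {key!r} for {tool_name}")

SCHEMAS = {"weather_api": {"city": str, "units": str}}
validate_args("weather_api", {"city": "Oslo", "units": "metric"}, SCHEMAS)  # passes
# validate_args("shell", {"cmd": "rm -rf /"}, SCHEMAS)  # raises PermissionError

String arguments that pass a type check can still carry injected commands, which is why pattern checks like the payload sketch in the overview complement schema validation.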
Destructive Action Block
Detects and blocks commands that delete data, stop services, or modify system configs.
Loop Prevention
Monitors for runaway agents stuck in execution loops.
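Loop detection can be as simple as counting repeated identical tool calls; a sketch with an illustrative threshold:

from collections import Counter

class LoopGuard:
    """Flags an agent that repeats the same tool call too many times."""
    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.seen = Counter()

    def record(self, tool_name: str, args: tuple) -> None:
        self.seen[(tool_name, args)] += 1
        if self.seen[(tool_name, args)] > self.max_repeats:
            raise RuntimeError(f"Loop detected: {tool_name}{args} repeated")

guard = LoopGuard()
for _ in range(3):
    guard.record("weather_api", ("Oslo",))  # a fourth identical call would raise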
When should I use this?
Apply this profile to any agent that can execute code, call external APIs, browse the web, or write to the file system.
Integration
# calculator and weather_api are your existing tool callables;
# the profile validates every call they receive before execution.
agent = GuardrailAgent(
    profile="agentic-ai",             # enables all five guardrails listed above
    tools=[calculator, weather_api],  # explicit allow-list of tools
    allow_file_system=False,          # blocks file system writes entirely
)
Frequently Asked Questions
Does this work with LangChain?
Yes, it integrates as middleware in the LangChain execution loop.
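As a rough illustration of what that wiring might look like using LangChain's callback hooks (GuardrailCallback and its allow-list are hypothetical; consult the profile's documentation for the actual middleware class):

from langchain_core.callbacks import BaseCallbackHandler

class GuardrailCallback(BaseCallbackHandler):
    raise_error = True  # propagate blocks instead of logging and continuing

    def __init__(self, allowed_tools):
        self.allowed_tools = allowed_tools

    def on_tool_start(self, serialized, input_str, **kwargs):
        # Runs before the tool executes, so a blocked call never fires.
        name = serialized.get("name", "")
        if name not in self.allowed_tools:
            raise PermissionError(f"Tool blocked by guardrail: {name}")

# Usage: pass the handler when invoking your agent executor, e.g.
# agent_executor.invoke({"input": "..."},
#                       config={"callbacks": [GuardrailCallback({"calculator"})]})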