Beta / Experimental

Internal Tools

Safe defaults for internal employee-facing AI tools.

#enterprise #security

Overview

The Internal Tools profile is designed for applications used by employees inside your organization. Unlike public-facing bots, internal tools often need to handle sensitive company data (code, docs, strategy) while preventing that data from leaking to external model providers or logs.

This profile focuses heavily on outbound data security: ensuring that secrets, API keys, PII, and proprietary data don't accidentally leave your secure perimeter via LLM prompts or application logs.

Included Guardrails

5 Rules

Key Benefits

Data Leak Prevention

Aggressively scans for API keys, internal hostnames, and proprietary data markers.
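
For example, the default scan list can be extended through the same patterns override shown in the Integration section below. A minimal sketch, assuming the list accepts regular expressions as well as literal markers (the Integration example shows only literals, and the values here are illustrative):

overrides:
  internal-data-leak:
    patterns:
      - 'AKIA[0-9A-Z]{16}'                # AWS access key IDs
      - '[a-z0-9-]+\.corp\.example\.com'  # internal hostnames (example domain)
      - 'PROJECT-PHOENIX'                 # stand-in for a proprietary codename marker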

Logging Safety

Ensures that sensitive inputs are redacted before being written to any application logs.
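
The Integration section below sets the secrets-in-logs rule's redact_mode to "hash". A short sketch of that override; the "mask" value is an assumed alternative included for illustration only:

overrides:
  secrets-in-logs:
    redact_mode: "hash"    # replace each matched secret with a one-way hash
    # redact_mode: "mask"  # hypothetical alternative: replace with a fixed placeholder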

Model Stability

Pins model versions to prevent unexpected behavioral changes in internal workflows.
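
The profile's documented config doesn't show a pinning key, so the following is a hypothetical sketch of what such an override might look like; the model-pinning rule name and both keys are invented for illustration:

overrides:
  model-pinning:                 # hypothetical rule name
    pinned_model: "gpt-4-0613"   # pin an exact version, not a floating alias like "gpt-4"
    on_mismatch: "block"         # assumed option: fail closed if the runtime model differs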

When should I use this?

Internal code assistant / co-pilot
HR policy Q&A bot
Sales strategy document summarizer

Integration

config.yaml:
profile: internal-tools
overrides:
  internal-data-leak:
    patterns:
      - "CONFIDENTIAL"
      - "INTERNAL USE ONLY"
  secrets-in-logs:
    redact_mode: "hash"
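
A note on the redact_mode: "hash" choice above: hashing a matched secret (rather than deleting it) means the same value maps to the same token across log lines, assuming a consistent salt, so you can still correlate events during an incident investigation without ever writing the plaintext to disk.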

Frequently Asked Questions

Can I use this for customer-facing bots?

It's not ideal. This profile prioritizes protecting company data over filtering NSFW content and blocking jailbreaks, both of which matter more for public-facing bots.

Does it block all PII?

It warns on PII, but can be configured to allow employee names and emails on the assumption that usage stays internal.
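
A hypothetical sketch of that configuration; the pii rule name and the action and allow_types keys are invented for illustration and may not match the profile's real option names:

overrides:
  pii:
    action: "warn"       # flag PII instead of blocking it
    allow_types:         # internal identifiers permitted to pass through
      - "employee_name"
      - "employee_email"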