Ticket-Safe Sanitizer

What We Redact

Reference list of tokens, credentials, and sensitive patterns masked by the Ticket-Safe Sanitizer tools.

Updated: 2026-02-24

This page is the practical reference for default masking behavior across cURL, Log/JSON, and HAR workflows. It focuses on categories that are most likely to leak in support tickets: auth headers, API keys, cookies, token query params, cloud credentials, and private keys.

Use it as a review checklist after each sanitization run.

Why it matters

Most production incidents involve copied artifacts: raw logs, command lines, trace files, and payload snippets. These artifacts are useful for debugging, but they are also high-risk because secrets tend to appear next to error details. One accidental paste into a third-party system can expose access beyond the original incident scope.

Consistent redaction rules reduce that risk and improve communication quality. Teams can rely on predictable placeholders like [REDACTED:AUTH] or [REDACTED:QP], which makes snippets safer and easier to review.

Step-by-step checklist

  • Run the appropriate tool first: cURL Sanitizer, Log Sanitizer, or HAR Sanitizer.
  • Review the redaction report and verify expected categories were detected.
  • Confirm authentication values are masked in both header and inline formats.
  • Confirm service tokens are masked: GitHub, Slack, Stripe, SendGrid, AWS.
  • Confirm private key blocks are replaced entirely.
  • Confirm database URLs keep user/host context while masking passwords.
  • Confirm JSON key/value secrets (client_secret, password, token) are masked.
  • Run one final manual scan for business-specific internal secrets.
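The verification steps in this checklist can be partially automated. A minimal sketch, assuming a simple post-run check; the marker list and helper name are illustrative, not part of the tool:

```python
# Hypothetical post-run check: confirm expected placeholder categories
# actually appear in the sanitized output before sharing a ticket.
EXPECTED_MARKERS = [
    "[REDACTED:AUTH]",
    "[REDACTED:GITHUB_TOKEN]",
    "[REDACTED:PRIVATE_KEY_BLOCK]",
]

def missing_categories(sanitized: str, expected=EXPECTED_MARKERS) -> list[str]:
    """Return expected markers that never appear in the sanitized text."""
    return [m for m in expected if m not in sanitized]
```

An empty result means every expected category was observed; anything returned should send you back to the redaction report.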

Safe snippet examples

Header and query redaction:

Authorization: [REDACTED:AUTH]
x-api-key: [REDACTED:API_KEY]
Cookie: [REDACTED:COOKIE]
GET /v1/orders?token=[REDACTED:QP]&signature=[REDACTED:QP]
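Header and query masking like the above can be sketched as follows. The rule tables and function names are illustrative, not the tool's actual internals:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Illustrative rule tables; the real tool's coverage is broader.
HEADER_RULES = {
    "authorization": "[REDACTED:AUTH]",
    "x-api-key": "[REDACTED:API_KEY]",
    "cookie": "[REDACTED:COOKIE]",
}
TOKEN_PARAMS = {"token", "access_token", "id_token", "signature", "auth"}

def redact_header(line: str) -> str:
    """Mask the value of a sensitive header, keeping the header name."""
    name, sep, _ = line.partition(":")
    if sep and name.strip().lower() in HEADER_RULES:
        return f"{name}: {HEADER_RULES[name.strip().lower()]}"
    return line

def redact_query(url: str) -> str:
    """Mask token-like query parameter values, keeping URL structure."""
    parts = urlsplit(url)
    pairs = [(k, "[REDACTED:QP]" if k.lower() in TOKEN_PARAMS else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(pairs, safe="[]:")))
```

Note that the path, parameter names, and non-sensitive values are preserved, which keeps the request reproducible for debugging.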

Service key and token redaction:

ghp_******************************** -> [REDACTED:GITHUB_TOKEN]
xoxb-******************************* -> [REDACTED:SLACK_TOKEN]
sk_live_**************************** -> [REDACTED:STRIPE_KEY]
SG.******************************** -> [REDACTED:SENDGRID_KEY]
AKIA**************** -> [REDACTED:AWS_ACCESS_KEY_ID]
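Service tokens are typically caught by prefix-based patterns. A sketch of what such patterns can look like; prefixes and lengths vary by provider and era, so treat these regexes as illustrative rather than exhaustive:

```python
import re

# Illustrative patterns only; real token formats change over time.
SERVICE_PATTERNS = [
    (re.compile(r"\bghp_[A-Za-z0-9]{36}\b"), "[REDACTED:GITHUB_TOKEN]"),
    (re.compile(r"\bxox[baprs]-[A-Za-z0-9-]{10,}\b"), "[REDACTED:SLACK_TOKEN]"),
    (re.compile(r"\bsk_live_[A-Za-z0-9]{16,}\b"), "[REDACTED:STRIPE_KEY]"),
    (re.compile(r"\bSG\.[A-Za-z0-9_-]{16,}\.[A-Za-z0-9_-]{16,}\b"), "[REDACTED:SENDGRID_KEY]"),
    (re.compile(r"\bAKIA[A-Z0-9]{16}\b"), "[REDACTED:AWS_ACCESS_KEY_ID]"),
]

def redact_service_tokens(text: str) -> str:
    """Replace any known service token pattern with its category marker."""
    for pattern, marker in SERVICE_PATTERNS:
        text = pattern.sub(marker, text)
    return text
```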

Private key and DB URL redaction:

-----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY-----
-> [REDACTED:PRIVATE_KEY_BLOCK]

postgres://app_user:supersecret@db.internal/prod
-> postgres://app_user:[REDACTED:DB_PASS]@db.internal/prod
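The two behaviors above differ deliberately: PEM blocks are replaced entirely, while database URLs keep user and host context. A sketch of both, with the regexes as assumptions rather than the tool's actual rules:

```python
import re

# Matches an entire PEM private key block, including BEGIN/END lines.
PEM_BLOCK = re.compile(
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?-----END [A-Z ]*PRIVATE KEY-----",
    re.DOTALL,
)
# Captures scheme://user: and the trailing @ so only the password is masked.
DB_URL_PASS = re.compile(r"(\b[a-z+]+://[^:/\s]+:)[^@\s]+(@)")

def redact_secrets(text: str) -> str:
    """Drop key blocks entirely; keep DB URL structure, mask the password."""
    text = PEM_BLOCK.sub("[REDACTED:PRIVATE_KEY_BLOCK]", text)
    return DB_URL_PASS.sub(r"\1[REDACTED:DB_PASS]\2", text)
```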

Current default coverage

Core coverage includes:

  • Authorization headers and inline bearer tokens.
  • API key style headers (x-api-key, api-key, apikey).
  • Cookie and Set-Cookie headers.
  • Token-like query params (token, access_token, id_token, signature, auth).
  • JWT patterns in plain text.
  • Common PII markers: email, IPv4, and card-like numbers.
  • cURL-specific forms (-H, --header, -u, --user).
  • GitHub, Slack, Stripe, SendGrid, AWS key patterns.
  • Sensitive JSON key/value pairs and private key blocks.
  • Database connection strings with embedded credentials.

For HAR-specific cases, custom headers/keys and rule packs extend this baseline.
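One coverage item worth illustrating is JWT detection in plain text. A compact JWT is three base64url segments joined by dots, and the first segment starts with "eyJ" because it encodes a JSON object. A sketch, with the marker name chosen for illustration:

```python
import re

# "eyJ" is base64url for '{"', so JWT headers always start with it.
JWT_PATTERN = re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def redact_jwts(text: str) -> str:
    """Replace compact JWTs found anywhere in plain text."""
    return JWT_PATTERN.sub("[REDACTED:JWT]", text)
```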

Manual review triggers after automated redaction

Even with broad default coverage, reviewers should scan for a few high-risk leftovers that are easy to miss in complex payloads:

  • custom secret fields with uncommon names
  • high-entropy blobs in nested objects
  • base64-like strings that may encode credentials
  • signature/token values in uncommon query keys
  • legacy config fields copied from env dumps

If these patterns appear, update your custom keys or rule packs before the next incident.
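High-entropy blobs in particular can be flagged mechanically before manual review. A minimal sketch using Shannon entropy, with the threshold and minimum length as assumptions to tune per environment:

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits per character; random base64 scores near 6, English prose near 4."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_high_entropy(tokens, threshold=4.5, min_len=20):
    """Flag long, high-entropy strings as candidate leftover secrets."""
    return [t for t in tokens if len(t) >= min_len and shannon_entropy(t) > threshold]
```

Entropy alone produces false positives (hashes, request IDs), so flagged strings are review candidates, not automatic redactions.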

Coverage customization guidance

Default coverage is intentionally conservative to avoid destructive false positives. Teams should extend coverage with stack-specific keys and headers.

Recommended approach:

  • start with Rule Packs closest to your provider mix
  • add custom headers/keys used by internal services
  • run representative samples in Redaction Test Lab
  • review false positives before rolling out organization-wide

Treat coverage tuning as a living process, not one-time setup.
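The shape of such an extension can be sketched as follows. The rule layout and names here are hypothetical, not the tool's published config format:

```python
# Hypothetical custom rule-pack entries for internal services.
CUSTOM_RULES = [
    {"kind": "header", "match": "x-internal-auth", "marker": "[REDACTED:INTERNAL_AUTH]"},
    {"kind": "json_key", "match": "billing_secret", "marker": "[REDACTED:JSON]"},
]

def apply_header_rules(line: str, rules=CUSTOM_RULES) -> str:
    """Apply custom header rules on top of the default coverage."""
    name, sep, _ = line.partition(":")
    for rule in rules:
        if rule["kind"] == "header" and sep and name.strip().lower() == rule["match"]:
            return f"{name}: {rule['marker']}"
    return line
```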

Redaction quality scorecard

A quick way to evaluate output quality:

  • Safety: no reusable credentials or obvious PII left
  • Utility: enough structure remains for reproducibility
  • Consistency: placeholders are predictable across tools
  • Transparency: report output matches observed replacements

When any dimension drops, update rules and docs together.
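Two of these dimensions lend themselves to a mechanical pass. A minimal sketch, assuming the residual-secret prefixes and marker format shown earlier on this page:

```python
import re

# Consistency: output should contain predictable placeholder markers.
KNOWN_MARKERS = re.compile(r"\[REDACTED:[A-Z_]+\]")
# Safety: no recognizable raw secret prefixes should survive.
RESIDUAL_SECRETS = re.compile(r"\b(ghp_|xoxb-|sk_live_|AKIA)")

def scorecard(sanitized: str) -> dict:
    """Score the safety and consistency dimensions of sanitized output."""
    return {
        "safety": RESIDUAL_SECRETS.search(sanitized) is None,
        "consistency": bool(KNOWN_MARKERS.search(sanitized)),
    }
```

Utility and transparency still need a human: only a reviewer can judge whether enough structure survived and whether the report matches the output.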

Reporting coverage gaps

If you find a gap, report it via Contribute, including fake-only evidence and the expected replacement marker. High-quality reports that include context (cURL, log, or HAR) are merged faster and improve coverage for everyone.

Final pre-share check

Use this page as the baseline, then layer your internal key glossary on top. The best coverage comes from default rules plus environment-specific review patterns.