CLAUDE.md that actually blocks.

Enforce AI coding assistant instruction files via hooks. Rules marked never deny the tool call. Every firing is audited and correlated with the action the model actually took. One command. Local. Zero cost.

$ curl -sSf https://arai.taniwha.ai/install | sh

Rules fire when they matter — and actually block

You: "Create a new database migration"

PreToolUse: Write migrations/versions/001_add_users.py
→ Arai: deny
  reason: "Alembic never: hand-write migration files"
        [from CLAUDE.md:12, layer-1 imperative]

Claude: "I should use alembic revision --autogenerate instead..."
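
Firings like this trace back to plain lines in your instruction file. A minimal CLAUDE.md sketch that could produce the rule above (the section heading and wording are illustrative; the exact syntax Arai parses may differ):

```markdown
## Alembic
- never: hand-write migration files
- always: generate them with `alembic revision --autogenerate`
```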

Smart, not noisy

Intent-aware

"Never hand-write migrations" fires on Write but not Edit. Editing existing migrations is fine.

Code graph

tree-sitter scans your codebase. Writing to migrations/ triggers alembic rules even without "alembic" in the file.

Session tracking

"Never push without tests" falls silent after cargo test runs. Arai remembers what happened this session.

Block, don’t just advise

Rules with never/forbids/must_not emit permissionDecision: "deny". always and prefers still advise. Incremental rollout via ARAI_DENY_MODE=off.
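
On the wire, a deny is a standard Claude Code PreToolUse hook response. A sketch of what Arai would emit (the field names follow the Claude Code hooks schema; Arai's exact reason formatting is an assumption):

```json
{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "deny",
    "permissionDecisionReason": "Alembic never: hand-write migration files [CLAUDE.md:12 layer-1]"
  }
}
```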

Compliance tracking

Every PostToolUse is correlated against its PreToolUse firings. Each rule gets an obeyed, ignored, or unclear verdict. arai audit --outcome=ignored tells you which rules the model keeps flouting.

Derivation trace

Every firing carries source file, line, and parser layer. Hook output shows [CLAUDE.md:42 layer-1] — no more guessing why a rule fired.

Explain before you commit

arai why "git push --force" replays a hypothetical tool call through the live match pipeline. Read-only. Ship new rules with confidence.
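
A hypothetical session (the rule and output layout are illustrative, mirroring the firing format shown at the top of the page):

```
$ arai why "git push --force"
→ would deny
  reason: "git never: force-push shared branches"
        [from CLAUDE.md:27, layer-1 imperative]
```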

Self-pruning rules

Annotate a rule with (expires 2026-12-31) or (until 2027-06-30). Arai filters it out after the date automatically — perfect for incident-driven rules that have a shelf life.
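
For example, a post-incident rule that retires itself (the rule text is illustrative; the date annotation syntax is the one above):

```markdown
- never: write directly to the legacy orders table (expires 2026-12-31)
```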

Audit log

Every firing is appended to a local JSONL you can query. See which rules fire, on which tools, for which prompts. Nothing leaves your machine.
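
An illustrative entry (the field names here are an assumption, not Arai's documented schema):

```json
{"ts": "2026-02-03T10:41:07Z", "event": "PreToolUse", "tool": "Write", "rule": "Alembic never: hand-write migration files", "source": "CLAUDE.md:12", "layer": 1, "decision": "deny"}
```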

Aggregate stats

arai stats rolls up the audit log into top rules, tools, and days — so you can see which guardrails are load-bearing and which never fire.

Rule regression tests

arai test replays synthetic hook payloads through the live match pipeline. Catch rule behaviour drift before a real session does. CI-friendly JSON output.

Capture to replay

arai record turns real firings from the audit log into scenario fixtures. You don’t hand-write regression tests — you capture the ones that matter and pin them.

Preview before commit

arai lint CLAUDE.md shows exactly which rules a file produces with their classified intent. Iterate on wording without touching the DB.

Shared policies

Inherit org-wide rules with one directive: arai:extends https://.... Trusted per URL, HTTPS only, cached locally. No policy service — just a markdown file upstream.
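
In practice that is a single line in your instruction file (the URL is illustrative):

```markdown
arai:extends https://policies.example.com/engineering-rules.md
```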

Agent-authored guards

Runs as an MCP server. The agent can register new rules mid-session ("never touch /etc", "always run tests first") and Arai enforces them on the next tool call.

Zero noise

Only domain-specific rules fire. Generic principles ("write clear code", "be concise") stay silent instead of spamming every tool call.

Any LLM enrichment

Classify rules via Claude, Ollama, or any LLM CLI. Or use the built-in sentence transformer.

<5ms latency

SQLite lookups on the hook path. No network calls. No LLM calls at runtime.

Install

Script
curl -sSf https://arai.taniwha.ai/install | sh
npm
npm install -g @taniwhaai/arai
Cargo
cargo install arai
Homebrew
brew install taniwhaai/tap/arai
Then cd your-project && arai init

Part of the Taniwha family

Arai is the open-source guardrail core of Kete, Taniwha AI’s runtime reliability platform for AI coding agents. If you want semantic enrichment, impact analysis across callers and transitive dependents, or a hosted platform with team features, start with Kete. If you just want your instruction files enforced locally — no hosted service, no telemetry server — Arai is all you need.