Enforce AI coding assistant instruction files via hooks. One command. Runs locally. Zero cost.
"Never hand-write migrations" fires on Write but not Edit. Editing existing migrations is fine.
Tree-sitter scans your codebase. Writing to migrations/ triggers Alembic rules even without "alembic" in the file.
"Never push without tests" silences after cargo test runs. Arai remembers what happened this session.
Every firing is appended to a local JSONL file you can query. See which rules fire, on which tools, for which prompts. Nothing leaves your machine.
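Because the log is plain JSONL, ordinary shell tools can summarize it. A minimal sketch, assuming a log path, a "rule" field, and sample entries for illustration only; none of this is Arai's documented schema (jq would work equally well if installed):

```shell
# Hypothetical log location and field names; Arai's real schema may differ.
log=$(mktemp)
cat > "$log" <<'EOF'
{"rule":"no-handwritten-migrations","tool":"Write","prompt":"add users table"}
{"rule":"tests-before-push","tool":"Bash","prompt":"git push"}
{"rule":"no-handwritten-migrations","tool":"Write","prompt":"alter column"}
EOF
# Count firings per rule, most frequent first.
grep -o '"rule":"[^"]*"' "$log" | sort | uniq -c | sort -rn
```

The same pattern extends to grouping by tool or grepping prompts, since each event is one self-contained JSON line.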
Runs as an MCP server. The agent can register new rules mid-session ("never touch /etc", "always run tests first") and Arai enforces them on the next tool call.
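Over MCP, registering a rule would be an ordinary tools/call request. The tools/call method is part of the MCP spec, but the tool name "register_rule" and its argument names below are assumptions for illustration, not Arai's documented API:

```shell
# Hypothetical MCP request; "register_rule" and its arguments are illustrative.
req='{"jsonrpc":"2.0","id":7,"method":"tools/call","params":{"name":"register_rule","arguments":{"rule":"never touch /etc","scope":"session"}}}'
printf '%s\n' "$req"
```

Once registered, the rule is enforced on the agent's next tool call, with no restart of the session.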
Only domain-specific rules fire; general principles already in CLAUDE.md stay silent.
Classify rules via Claude, Ollama, or any LLM CLI. Or use the built-in sentence transformer.
SQLite lookups on the hook path. No network calls. No LLM calls at runtime.
curl -sSf https://arai.taniwha.ai/install | sh
npm install -g @taniwhaai/arai
cargo install arai
brew install taniwhaai/tap/arai
cd your-project && arai init
Arai is the open-source guardrail core of Kete, Taniwha AI’s runtime reliability platform for AI coding agents. If you want semantic enrichment, impact analysis across callers and transitive dependents, or a hosted platform with team features, start with Kete. If you just want your instruction files enforced locally — no hosted service, no telemetry server — Arai is all you need.