codebase-readiness
repo → scores your repo across 8 dimensions to measure AI agent readiness, then builds a fix roadmap
Codebase Readiness audits a repository against eight dimensions that predict whether autonomous coding agents can ship reliable pull requests at volume. The open-source scoring framework, hosted on GitHub, benchmarks teams against Stripe's reported cadence of 1,000+ AI-generated PRs per week. Engineering leaders at commerce companies use it to diagnose gaps in test coverage, documentation, modularity, and CI discipline before scaling Claude Code, Cursor, or similar agents across product teams.
> what it does
- Scores repositories across eight readiness dimensions for autonomous agent work
- Benchmarks against Stripe's 1,000+ AI-generated pull requests per week cadence
- Identifies test coverage and CI gaps that block agent reliability
- Flags documentation and modularity weaknesses before rolling out coding agents
- Runs as an open-source workflow with no vendor lock-in or telemetry
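The scoring idea above can be sketched roughly as follows. This is a minimal illustration only: the dimension names, the equal weighting, and the 70-point threshold are assumptions made for this example, not the framework's actual rubric.

```python
# Illustrative sketch: roll per-dimension scores (0-100) into an overall
# readiness grade and a weakest-first fix roadmap. Dimension names and
# equal weighting are assumptions, not the tool's published rubric.

DIMENSIONS = [
    "test coverage",
    "ci discipline",
    "documentation",
    "modularity",
    "typing",
    "dependency hygiene",
    "dev environment",
    "code conventions",
]

def readiness_score(scores: dict[str, float]) -> float:
    """Overall readiness as the mean of the eight dimension scores."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def fix_roadmap(scores: dict[str, float], threshold: float = 70.0) -> list[str]:
    """Dimensions scoring below the threshold, weakest first."""
    weak = [d for d in DIMENSIONS if scores[d] < threshold]
    return sorted(weak, key=lambda d: scores[d])

# Hypothetical audit result for one repository.
example = {
    "test coverage": 45.0,
    "ci discipline": 80.0,
    "documentation": 55.0,
    "modularity": 72.0,
    "typing": 90.0,
    "dependency hygiene": 85.0,
    "dev environment": 68.0,
    "code conventions": 75.0,
}

print(readiness_score(example))
print(fix_roadmap(example))
```

A real audit would weight dimensions differently (test coverage and CI discipline gate agent reliability more directly than naming conventions), but the shape is the same: score each dimension, then order the gaps into a roadmap.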