# Core Concepts
Understand the intelligence model that powers rvue - how conventions, anti-patterns, knowledge, and reviewer data are extracted, scored, and delivered.
## Intelligence model

rvue produces a structured intelligence file (`.rvue/intelligence.json`) containing five categories of team knowledge, each derived from different data sources and ranked by confidence.
| Category | What it captures | Primary source |
|---|---|---|
| Conventions | How the team writes code (patterns, styles, rules) | Code analysis + PR comments |
| Anti-patterns | What gets PRs rejected or causes bugs | Closed PRs + review comments |
| Knowledge graph | Domain decisions (auth strategy, DB conventions, etc.) | PR discussions |
| Reviewers | Who knows what - expertise map | Review comment history |
| Risk factors | Where problems cluster (hot files, large PRs) | PR file changes |
## Conventions

Conventions are coding rules that your team follows consistently. Each convention has the following properties:

### Properties
| Field | Type | Description |
|---|---|---|
| id | string | Stable identifier for merging across syncs |
| category | enum | One of: imports, testing, naming, async, structure, error-handling, api, documentation, security, performance, type-safety, other |
| rule | string | Human-readable description of the convention |
| confidence | number (0.0–1.0) | How consistently this pattern appears in the codebase |
| source | enum | How it was discovered: code_analysis, pr_analysis, config_import, stack_baseline |
| evidence_count | number | Number of files, comments, or config entries supporting this |
| examples | array | Code examples showing the convention in practice |
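The table above can be sketched as a TypeScript interface. This is an illustrative model of one convention entry, not rvue's actual schema; field names follow the table, and the example values are made up:

```typescript
// Illustrative sketch of a convention entry, mirroring the properties table.
type ConventionCategory =
  | "imports" | "testing" | "naming" | "async" | "structure"
  | "error-handling" | "api" | "documentation" | "security"
  | "performance" | "type-safety" | "other";

type ConventionSource =
  | "code_analysis" | "pr_analysis" | "config_import" | "stack_baseline";

interface Convention {
  id: string;             // stable identifier for merging across syncs
  category: ConventionCategory;
  rule: string;           // human-readable description of the convention
  confidence: number;     // 0.0–1.0
  source: ConventionSource;
  evidence_count: number; // files, comments, or config entries supporting it
  examples: string[];     // code examples showing the convention in practice
}

// Hypothetical entry for a named-exports convention.
const example: Convention = {
  id: "named-exports",
  category: "imports",
  rule: "Prefer named exports over default exports",
  confidence: 0.92,
  source: "code_analysis",
  evidence_count: 138,
  examples: ["export function parse() {}"],
};
```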
### Confidence scoring
Confidence is calculated differently depending on the source:
- Code analysis: ratio of files following the pattern (e.g., 92% named exports → 0.92 confidence)
- Config import: always 0.9, since linter rules are explicit team decisions
- PR analysis: AI-assigned based on how frequently the pattern appears in review feedback
- Stack baseline: framework-specific defaults (e.g., "use pnpm" from package-lock detection)
Conventions below the minimum confidence threshold (default: 0.6) are filtered out during post-processing.
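For the code-analysis source, the ratio-based scoring and the post-processing threshold described above can be sketched as follows. The function names are illustrative, and the 0.6 default comes from this page:

```typescript
// Ratio-based confidence for code-analysis conventions:
// the fraction of files that follow the pattern.
function codeAnalysisConfidence(matchingFiles: number, totalFiles: number): number {
  if (totalFiles === 0) return 0;
  return matchingFiles / totalFiles;
}

// Post-processing: drop anything below the minimum confidence threshold.
function filterByConfidence<T extends { confidence: number }>(
  items: T[],
  minConfidence = 0.6,
): T[] {
  return items.filter((item) => item.confidence >= minConfidence);
}

const conventions = [
  { rule: "Prefer named exports", confidence: codeAnalysisConfidence(92, 100) }, // 0.92
  { rule: "Use barrel files", confidence: codeAnalysisConfidence(40, 100) },     // 0.4
];
const kept = filterByConfidence(conventions); // only the 0.92 convention survives
```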
## Anti-patterns
Anti-patterns are code practices that your team actively discourages - patterns that get PRs rejected, cause bugs, or violate team standards.
### Severity levels
| Severity | Meaning | Example |
|---|---|---|
| Critical | Security vulnerability or data loss risk | SQL string concatenation, exposed secrets |
| High | Likely to cause bugs or major quality issues | Swallowed exceptions, missing error handling |
| Medium | Code quality concern, may cause issues | useEffect without cleanup, missing type safety |
| Low | Style or minor improvement | Inconsistent formatting, verbose patterns |
Anti-patterns are primarily extracted from closed PRs (rejected or requiring significant changes) and review comments that flag specific patterns.
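A minimal sketch of how an anti-pattern entry and its severity ordering might look. The shape and the sorting helper are assumptions for illustration, not rvue's actual data model:

```typescript
type Severity = "critical" | "high" | "medium" | "low";

// Illustrative anti-pattern entry.
interface AntiPattern {
  pattern: string;    // description of the discouraged practice
  severity: Severity;
  evidence: string[]; // review comments or PR references that flagged it
}

// Rank map so findings can be surfaced most-severe first.
const severityRank: Record<Severity, number> = {
  critical: 0, high: 1, medium: 2, low: 3,
};

function bySeverity(a: AntiPattern, b: AntiPattern): number {
  return severityRank[a.severity] - severityRank[b.severity];
}

const findings: AntiPattern[] = [
  { pattern: "useEffect without cleanup", severity: "medium", evidence: [] },
  { pattern: "SQL string concatenation", severity: "critical", evidence: [] },
];
findings.sort(bySeverity); // critical finding moves to the front
```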
## Knowledge graph
The knowledge graph captures domain-specific decisions that your team has made - the kind of tribal knowledge that normally lives only in people's heads or scattered across PR discussions.
It's organized as a two-level structure:
```json
{
  "authentication": {
    "provider": {
      "value": "Better Auth with GitHub OAuth",
      "confidence": 0.99,
      "source_prs": [12, 45]
    },
    "session_strategy": {
      "value": "Cookie-based sessions with Drizzle adapter",
      "confidence": 0.95,
      "source_prs": [12]
    }
  },
  "database": {
    "orm": {
      "value": "Drizzle ORM with Neon PostgreSQL",
      "confidence": 0.99,
      "source_prs": [8]
    }
  }
}
```

Topics are automatically categorized from PR discussion content. Each entry includes the source PRs so decisions can be traced back to their origin.
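Reading this two-level structure and tracing a decision back to its source PRs can be sketched like this. The types and the `lookup` helper are illustrative, not rvue's actual API:

```typescript
// Two-level structure: topic → key → entry.
interface KnowledgeEntry {
  value: string;
  confidence: number;
  source_prs: number[];
}

type KnowledgeGraph = Record<string, Record<string, KnowledgeEntry>>;

const graph: KnowledgeGraph = {
  database: {
    orm: {
      value: "Drizzle ORM with Neon PostgreSQL",
      confidence: 0.99,
      source_prs: [8],
    },
  },
};

// Returns undefined when a decision hasn't been captured yet.
function lookup(g: KnowledgeGraph, topic: string, key: string): KnowledgeEntry | undefined {
  return g[topic]?.[key];
}

const orm = lookup(graph, "database", "orm");
// orm?.source_prs traces the decision back to PR #8
```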
## Reviewer expertise
Reviewer profiles map team members to their areas of expertise based on review activity patterns:
- Expertise areas: which directories/modules they review most, with confidence scores
- Review count: total reviews and per-area breakdown
- Approval rate: percentage of reviews resulting in approval
A contributor must have at least 3 review comments before they are included in the reviewer map. Up to 15 reviewers are tracked per repository.
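The inclusion thresholds above can be sketched as a small filter. The `ReviewerProfile` shape and function name are illustrative; the 3-comment minimum and 15-reviewer cap come from this page:

```typescript
// Illustrative reviewer profile.
interface ReviewerProfile {
  login: string;        // GitHub login (also the merge key across syncs)
  reviewCount: number;  // review comments observed
  approvalRate: number; // fraction of reviews resulting in approval, 0.0–1.0
}

// Keep contributors with at least `minComments` review comments,
// capped at the `maxReviewers` most active.
function selectReviewers(
  profiles: ReviewerProfile[],
  minComments = 3,
  maxReviewers = 15,
): ReviewerProfile[] {
  return profiles
    .filter((p) => p.reviewCount >= minComments)
    .sort((a, b) => b.reviewCount - a.reviewCount)
    .slice(0, maxReviewers);
}

const team = selectReviewers([
  { login: "alice", reviewCount: 2, approvalRate: 1.0 },  // below minimum, dropped
  { login: "bob", reviewCount: 12, approvalRate: 0.8 },
  { login: "carol", reviewCount: 5, approvalRate: 0.9 },
]);
// bob and carol qualify, most active first
```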
## Maturity levels
Intelligence maturity indicates how much data rvue has to work with. More PRs analyzed means higher confidence and more nuanced patterns.
| Level | Label | PRs required | What you get |
|---|---|---|---|
| 1 | Baseline | 0–9 | Local codebase analysis + config imports only |
| 2 | Growing | 10–49 | PR-derived conventions start appearing |
| 3 | Established | 50–99 | Strong convention coverage, anti-patterns detected |
| 4 | Mature | 100+ | Full intelligence - conventions, anti-patterns, knowledge, reviewers |
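The PR-count thresholds in the table map to levels as in this sketch (the function name is illustrative; the boundaries come from the table):

```typescript
// Map cumulative analyzed-PR count to an intelligence maturity level.
function maturityLevel(prCount: number): { level: number; label: string } {
  if (prCount >= 100) return { level: 4, label: "Mature" };
  if (prCount >= 50) return { level: 3, label: "Established" };
  if (prCount >= 10) return { level: 2, label: "Growing" };
  return { level: 1, label: "Baseline" };
}
```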
## Intelligence sources
Each piece of intelligence is tagged with its source, so you know how it was derived:
| Source | Description | Confidence range |
|---|---|---|
| code_analysis | Statistical analysis of your codebase (file patterns, export styles) | 0.5–0.95 |
| config_import | Rules from ESLint, Prettier, TypeScript, EditorConfig | 0.9 (fixed) |
| stack_baseline | Framework-specific defaults (React, Next.js, Express, etc.) | 0.7–0.95 |
| pr_analysis | AI extraction from merged PR review comments | 0.6–0.95 |
| comment_analysis | Direct review comment pattern matching | 0.7–0.9 |
| git_history | Contributor expertise from commit/file change patterns | 0.5–0.9 |
## Merging and deduplication
When intelligence is synced incrementally, rvue merges new data with existing data using these rules:
- Conventions: deduplicated by normalized rule text. Higher-confidence version wins.
- Anti-patterns: merged by pattern description. Evidence lists are combined.
- Knowledge: merged by topic + key. Newer entries overwrite older ones.
- Reviewers: merged by GitHub login. Higher review count wins.
- Maturity: PR count is cumulative. Never decreases.
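The convention merge rule ("deduplicated by normalized rule text, higher-confidence version wins") can be sketched as below. The whitespace-collapsing, lowercasing normalization is an assumption; rvue's actual normalization may differ:

```typescript
interface Convention {
  id: string;
  rule: string;
  confidence: number;
}

// Normalize rule text so trivially different phrasings dedupe together.
// (Lowercase + whitespace collapse is an illustrative assumption.)
function normalizeRule(rule: string): string {
  return rule.toLowerCase().replace(/\s+/g, " ").trim();
}

// Merge incoming conventions into existing ones:
// same normalized rule → keep the higher-confidence version.
function mergeConventions(existing: Convention[], incoming: Convention[]): Convention[] {
  const byRule = new Map<string, Convention>();
  for (const c of [...existing, ...incoming]) {
    const key = normalizeRule(c.rule);
    const prev = byRule.get(key);
    if (!prev || c.confidence > prev.confidence) byRule.set(key, c);
  }
  return [...byRule.values()];
}

const merged = mergeConventions(
  [{ id: "c1", rule: "Prefer named exports", confidence: 0.8 }],
  [
    { id: "c2", rule: "prefer  named exports", confidence: 0.92 }, // duplicate, wins
    { id: "c3", rule: "Use async/await over callbacks", confidence: 0.7 },
  ],
);
// merged has two entries; the 0.92 version replaced the 0.8 one
```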
This means intelligence is additive - syncing never removes previously discovered patterns unless they fall below the confidence threshold.