Core Concepts

Understand the intelligence model that powers rvue - how conventions, anti-patterns, knowledge, and reviewer data are extracted, scored, and delivered.

Intelligence model

rvue produces a structured intelligence file (.rvue/intelligence.json) containing five categories of team knowledge, each derived from different data sources and ranked by confidence.

| Category | What it captures | Primary source |
| --- | --- | --- |
| Conventions | How the team writes code (patterns, styles, rules) | Code analysis + PR comments |
| Anti-patterns | What gets PRs rejected or causes bugs | Closed PRs + review comments |
| Knowledge graph | Domain decisions (auth strategy, DB conventions, etc.) | PR discussions |
| Reviewers | Who knows what - expertise map | Review comment history |
| Risk factors | Where problems cluster (hot files, large PRs) | PR file changes |
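The five categories above can be pictured as the top-level shape of the intelligence file. The sketch below is illustrative only: the field names are assumptions, not the actual schema.

```typescript
// Hypothetical top-level shape of .rvue/intelligence.json.
// Field names are illustrative guesses, not the published schema.
interface Intelligence {
  conventions: unknown[];
  anti_patterns: unknown[];
  knowledge: Record<string, unknown>; // topic -> key -> entry
  reviewers: unknown[];
  risk_factors: unknown[];
}

// An empty file of this shape, as a freshly initialized repo might have.
const intelligence: Intelligence = {
  conventions: [],
  anti_patterns: [],
  knowledge: {},
  reviewers: [],
  risk_factors: [],
};
```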

Conventions

Conventions are coding rules that your team follows consistently. Each convention has:

Properties

| Field | Type | Description |
| --- | --- | --- |
| id | string | Stable identifier for merging across syncs |
| category | enum | One of: imports, testing, naming, async, structure, error-handling, api, documentation, security, performance, type-safety, other |
| rule | string | Human-readable description of the convention |
| confidence | 0.0–1.0 | How consistently this pattern appears in the codebase |
| source | enum | How it was discovered: code_analysis, pr_analysis, config_import, stack_baseline |
| evidence_count | number | Number of files, comments, or config entries supporting this |
| examples | array | Code examples showing the convention in practice |
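Putting the fields together, a convention entry looks roughly like this. The field names come from the table above; the concrete values (id format, evidence count) are illustrative.

```typescript
// The four discovery sources listed in the table above.
type ConventionSource =
  | "code_analysis" | "pr_analysis" | "config_import" | "stack_baseline";

interface Convention {
  id: string;             // stable across syncs
  category: string;       // one of the enum values in the table
  rule: string;           // human-readable description
  confidence: number;     // 0.0–1.0
  source: ConventionSource;
  evidence_count: number;
  examples: string[];
}

// Illustrative entry: the values below are made up for the example.
const namedExports: Convention = {
  id: "imports/named-exports",
  category: "imports",
  rule: "Prefer named exports over default exports",
  confidence: 0.92,
  source: "code_analysis",
  evidence_count: 138,
  examples: ["export function parseConfig(raw: string): Config"],
};
```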

Confidence scoring

Confidence is calculated differently depending on the source:

  • Code analysis: ratio of files following the pattern (e.g., 92% named exports → 0.92 confidence)
  • Config import: always 0.9, since linter rules are explicit team decisions
  • PR analysis: AI-assigned based on how frequently the pattern appears in review feedback
  • Stack baseline: framework-specific defaults (e.g., "use pnpm" from package-lock detection)

Conventions below the minimum confidence threshold (default: 0.6) are filtered out during post-processing.
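As a sketch of the two mechanics described above: code-analysis confidence is a simple ratio of files following the pattern, and post-processing drops anything below the minimum threshold (the helper names are mine, not rvue's API).

```typescript
// Code-analysis confidence: fraction of files that follow the pattern.
// E.g. 92 of 100 files using named exports -> 0.92.
function codeAnalysisConfidence(matching: number, total: number): number {
  return total === 0 ? 0 : matching / total;
}

// Post-processing filter: keep only entries at or above the
// minimum confidence threshold (default 0.6).
function aboveThreshold<T extends { confidence: number }>(
  items: T[],
  minConfidence = 0.6,
): T[] {
  return items.filter((item) => item.confidence >= minConfidence);
}
```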

Anti-patterns

Anti-patterns are code practices that your team actively discourages - patterns that get PRs rejected, cause bugs, or violate team standards.

Severity levels

| Severity | Meaning | Example |
| --- | --- | --- |
| Critical | Security vulnerability or data loss risk | SQL string concatenation, exposed secrets |
| High | Likely to cause bugs or major quality issues | Swallowed exceptions, missing error handling |
| Medium | Code quality concern, may cause issues | useEffect without cleanup, missing type safety |
| Low | Style or minor improvement | Inconsistent formatting, verbose patterns |

Anti-patterns are primarily extracted from closed PRs (rejected or requiring significant changes) and review comments that flag specific patterns.
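A sketch of what an anti-pattern entry might look like. The severity values match the table above; the other field names and the evidence strings are assumptions for illustration.

```typescript
// Severity levels from the table above.
type Severity = "critical" | "high" | "medium" | "low";

// Illustrative anti-pattern entry; field names beyond severity
// are assumptions, not the actual schema.
interface AntiPattern {
  pattern: string;    // description, also used as the merge key
  severity: Severity;
  evidence: string[]; // PRs and review comments that flagged it
}

const swallowedErrors: AntiPattern = {
  pattern: "Catch blocks that silently swallow exceptions",
  severity: "high",
  evidence: ["review comment on a rejected PR"], // placeholder evidence
};
```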

Knowledge graph

The knowledge graph captures domain-specific decisions that your team has made - the kind of tribal knowledge that normally lives only in people's heads or scattered across PR discussions.

It's organized as a two-level structure:

Knowledge graph structure

```json
{
  "authentication": {
    "provider": {
      "value": "Better Auth with GitHub OAuth",
      "confidence": 0.99,
      "source_prs": [12, 45]
    },
    "session_strategy": {
      "value": "Cookie-based sessions with Drizzle adapter",
      "confidence": 0.95,
      "source_prs": [12]
    }
  },
  "database": {
    "orm": {
      "value": "Drizzle ORM with Neon PostgreSQL",
      "confidence": 0.99,
      "source_prs": [8]
    }
  }
}
```

Topics are automatically categorized from PR discussion content. Each entry includes the source PRs so decisions can be traced back to their origin.

Reviewer expertise

Reviewer profiles map team members to their areas of expertise based on review activity patterns:

  • Expertise areas: which directories/modules they review most, with confidence scores
  • Review count: total reviews and per-area breakdown
  • Approval rate: percentage of reviews resulting in approval

A contributor must have at least 3 review comments to be included in reviewer mapping. Up to 15 reviewers are tracked per repository.
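The inclusion rules can be sketched as a simple filter-and-cap: at least 3 review comments to qualify, at most 15 reviewers kept. The field names below are assumptions.

```typescript
// Illustrative reviewer record; field names are assumptions.
interface Reviewer {
  login: string;         // GitHub login, used as the merge key
  comment_count: number; // total review comments by this contributor
}

// Keep contributors with >= 3 review comments, capped at the
// 15 most active reviewers per repository.
function selectReviewers(candidates: Reviewer[]): Reviewer[] {
  return candidates
    .filter((r) => r.comment_count >= 3)
    .sort((a, b) => b.comment_count - a.comment_count)
    .slice(0, 15);
}
```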

Maturity levels

Intelligence maturity indicates how much data rvue has to work with. More PRs analyzed means higher confidence and more nuanced patterns.

| Level | Label | PRs required | What you get |
| --- | --- | --- | --- |
| 1 | Baseline | 0–9 | Local codebase analysis + config imports only |
| 2 | Growing | 10–49 | PR-derived conventions start appearing |
| 3 | Established | 50–99 | Strong convention coverage, anti-patterns detected |
| 4 | Mature | 100+ | Full intelligence - conventions, anti-patterns, knowledge, reviewers |
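The level boundaries above map directly from the cumulative PR count:

```typescript
// Maturity level from cumulative PR count, per the table above.
function maturityLevel(prCount: number): number {
  if (prCount >= 100) return 4; // Mature
  if (prCount >= 50) return 3;  // Established
  if (prCount >= 10) return 2;  // Growing
  return 1;                     // Baseline
}
```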

Growing intelligence

Intelligence improves over time. Each sync adds new PR data and refines confidence scores. Anti-patterns and knowledge graph entries typically require 20+ PRs to become meaningful.

Intelligence sources

Each piece of intelligence is tagged with its source, so you know how it was derived:

| Source | Description | Confidence range |
| --- | --- | --- |
| code_analysis | Statistical analysis of your codebase (file patterns, export styles) | 0.5–0.95 |
| config_import | Rules from ESLint, Prettier, TypeScript, EditorConfig | 0.9 (fixed) |
| stack_baseline | Framework-specific defaults (React, Next.js, Express, etc.) | 0.7–0.95 |
| pr_analysis | AI extraction from merged PR review comments | 0.6–0.95 |
| comment_analysis | Direct review comment pattern matching | 0.7–0.9 |
| git_history | Contributor expertise from commit/file change patterns | 0.5–0.9 |

Merging and deduplication

When intelligence is synced incrementally, rvue merges new data with existing data using these rules:

  • Conventions: deduplicated by normalized rule text. Higher-confidence version wins.
  • Anti-patterns: merged by pattern description. Evidence lists are combined.
  • Knowledge: merged by topic + key. Newer entries overwrite older ones.
  • Reviewers: merged by GitHub login. Higher review count wins.
  • Maturity: PR count is cumulative. Never decreases.

This means intelligence is additive - syncing never removes previously discovered patterns unless they fall below the confidence threshold.
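The convention merge rule above (dedupe by normalized rule text, higher confidence wins) can be sketched as follows; the exact normalization rvue applies is an assumption here.

```typescript
interface Conv {
  rule: string;
  confidence: number;
}

// Merge incoming conventions into existing ones: entries with the
// same normalized rule text are deduplicated, and the version with
// the higher confidence wins.
function mergeConventions(existing: Conv[], incoming: Conv[]): Conv[] {
  const byRule = new Map<string, Conv>();
  for (const conv of [...existing, ...incoming]) {
    // Normalization is assumed to be trim + lowercase for this sketch.
    const key = conv.rule.trim().toLowerCase();
    const prev = byRule.get(key);
    if (!prev || conv.confidence > prev.confidence) byRule.set(key, conv);
  }
  return [...byRule.values()];
}
```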