How to Make Cursor, GitHub Copilot, and Claude Code Follow Your Team's Conventions
A practical guide to configuring every major AI coding tool - Cursor, GitHub Copilot, Claude Code, Codex, Windsurf, Gemini CLI - to respect your team's patterns, or automating the entire process with rvue.
Every AI coding tool you use - Cursor, GitHub Copilot, Claude Code, Codex, Windsurf, Gemini CLI - generates code in its own default style. Without explicit guidance, they won’t match your team’s naming, structure, or architectural choices. This guide walks through the manual way to configure each tool, then shows how a single automated approach can replace six separate config files.
The problem: every tool, different style
When you ask Cursor to add an API route, Copilot to write a test, or Claude Code to refactor a component, the output is syntactically correct but often doesn’t match how your team actually does things. You end up rewriting variable names, moving logic into different modules, or stripping out patterns your codebase avoids. As we explored in why AI tools ignore your team’s patterns, the models have no built-in view of your conventions; they need to be given that context explicitly.
Manual approach: one config per tool
Each major AI coding tool supports project-level instructions. Here’s where they live and what typically goes in them.
Cursor: .cursor/rules/
Cursor reads rule files from .cursor/rules/. You can add .mdc or .md files that describe coding standards, file layout, and patterns. Rules can be global or scoped by glob (e.g. only for **/api/**). Put in things like: “API routes live under app/api/, use our shared error type,” or “Never use default exports for components.” Cursor merges these into the context when generating or editing code.
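As a concrete illustration, a scoped rule file might look like the sketch below. The file name, the ApiError type, and the schema-helper convention are hypothetical examples, not part of Cursor itself; the frontmatter fields (description, globs, alwaysApply) follow Cursor's .mdc rule format.

```markdown
---
description: Conventions for API route handlers
globs: **/api/**
alwaysApply: false
---

- API routes live under app/api/ and return our shared ApiError type on failure.
- Never use default exports for route handlers or components.
- Validate request bodies with the shared schema helpers before touching the database.
```

Because the globs field scopes the rule, Cursor only pulls it into context when you work on matching files, which keeps the prompt budget for unrelated edits.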
GitHub Copilot: .github/copilot-instructions.md
Copilot looks for .github/copilot-instructions.md at the repo root. This single file is the main place to document conventions, tech stack, and patterns. Describe your folder structure, testing style, and any anti-patterns. Copilot uses it to tailor suggestions in the IDE and in Copilot-powered code review.
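A minimal copilot-instructions.md might be structured like this sketch. The stack and the specific rules are illustrative placeholders; substitute your own.

```markdown
# Copilot instructions

## Stack
Next.js (App Router), TypeScript, Prisma, Vitest.

## Conventions
- Components use named exports; no default exports.
- API routes live under app/api/ and return the shared error shape.
- Tests sit next to the code as *.test.ts files and use Vitest, not Jest.

## Anti-patterns
- Do not introduce new dependencies without flagging them in the PR description.
```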
Claude Code: CLAUDE.md
Claude Code (and Claude in the IDE) looks for CLAUDE.md at the project root. Use it like a “project brief”: architecture, key directories, coding standards, and how to run tests or scripts. The more precise you are, the more Claude’s edits align with your codebase.
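A CLAUDE.md "project brief" could look roughly like the following. The repo layout, npm scripts, and standards shown here are hypothetical examples of the kind of detail that helps.

```markdown
# CLAUDE.md

## Architecture
Monorepo: apps/web (Next.js), packages/db (Prisma), packages/ui (shared components).

## Commands
- npm run test — unit tests (Vitest)
- npm run lint — ESLint + Prettier check

## Standards
- TypeScript strict mode; no `any` in new code.
- Errors are returned as typed result values, not thrown, in application code.
```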
Codex: codex.md or project instructions

Codex and similar CLI/API code generators support a project instructions file - recent versions of the Codex CLI look for AGENTS.md, while older setups used codex.md or a path you pass in. The content is the same idea: conventions, stack, and patterns so generated code fits your repo.
Windsurf: .windsurfrules
Windsurf reads .windsurfrules at the repo root (newer releases also support rules files under .windsurf/rules/). As with the other tools, you describe how the project is structured and how code should look so that Windsurf’s suggestions match your style.
Gemini CLI: GEMINI.md
Gemini’s CLI and editor integrations can read a GEMINI.md file at the root. Use it to document conventions and patterns so that Gemini-generated code follows your team’s approach.
The cost of maintaining six configs
Manually keeping Cursor rules, copilot-instructions.md, CLAUDE.md, Codex instructions, .windsurfrules, and GEMINI.md in sync is painful. When you change a convention, you have to remember to update every file. New joiners (or new tools) might only read one of them. There’s no single source of truth and no guarantee that what you wrote is what the team actually does - it’s often outdated or incomplete. For more on the underlying gap, see what Agent Skills are and how they address this fragmentation.
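Laid out in one repo, the duplication from the sections above looks like this (file names per tool as documented earlier; the rules file name under .cursor/rules/ is illustrative):

```text
repo/
├── .cursor/rules/conventions.mdc      # Cursor
├── .github/copilot-instructions.md    # GitHub Copilot
├── CLAUDE.md                          # Claude Code
├── codex.md                           # Codex
├── .windsurfrules                     # Windsurf
└── GEMINI.md                          # Gemini CLI
```

Six copies of the same conventions, each free to drift out of date independently.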
Automated approach: one skill, every tool
rvue extracts your team’s conventions, anti-patterns, and domain knowledge from real PR history and turns them into a single Agent Skill. Tools that support the open Agent Skills / MCP discovery mechanism (including Cursor, Claude Code, Windsurf, and others) can auto-discover that skill. You don’t maintain six different instruction files - you run one command and get evidence-based, up-to-date guidance that reflects how your team actually codes.
What rvue extracts vs. what you’d write manually
Manually, you’re guessing: “We use X” or “Avoid Y.” rvue analyzes merged PRs to see which patterns appear repeatedly, how files are organized, how errors are handled, and what reviewers consistently ask for. The resulting skill describes real behavior, not aspirational docs. That’s the same intelligence that makes AI-assisted code review effective when it’s team-aware.
Before vs. after
Without team intelligence: the model suggests a generic API route, default exports, and error handling that doesn’t match your patterns. You spend time reshaping the output. With an rvue-generated skill: the model has explicit guidance about your routes, exports, and error style, so the first draft is much closer to what you’d merge. You still review, but less of the edit is “fix the style.”
Step-by-step
- Connect the repo. Sign in at the rvue dashboard and connect the GitHub repo you want to base the skill on. Only merged PRs are used; no code is stored beyond what’s needed to derive conventions.
- Run the command. In the repo root, run npx rvue-cli enable. If you haven’t set up rvue yet, see getting started and Agent Skills for details.
- Verify. The command generates or updates the Agent Skill files in the repo. Open the skill in your editor or check the rvue dashboard to confirm conventions and patterns look right. Any tool that supports the skill format will pick them up automatically.
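To give a feel for what to verify, a generated skill might look roughly like the sketch below. The skill name, frontmatter fields, and the specific conventions are illustrative assumptions about the Agent Skills format and rvue's output, not a literal dump.

```markdown
---
name: team-conventions
description: Coding conventions derived from this repo's merged PR history
---

- API routes live under app/api/ and return the shared error type.
- Components use named exports only.
- Reviewers consistently request input validation before database calls;
  include it in new handlers.
```

The point to check is that each entry reflects something your team actually does, since the skill is derived from merged PRs rather than written from memory.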
You don’t need to choose between Cursor, Copilot, Claude Code, Codex, Windsurf, or Gemini CLI - you can use whichever fits your workflow. The goal is to give all of them the same team intelligence so generated code matches your conventions from the start.
Get started
If you’re tired of maintaining multiple instruction files or of AI output that doesn’t match your codebase, try the automated path: connect your repo, run npx rvue-cli enable, and let one Agent Skill feed every tool. For full setup and options, start with getting started and Agent Skills in the docs.