This is a submission for the GitHub Copilot CLI Challenge
What I Built
Ever had a 200-line function that “just worked” so you never refactored it? 🙈
The problem: Code quality tools tell you what’s wrong (high complexity, tech debt, stale dependencies), but they don’t help you fix it. You’re left staring at a dashboard of red metrics with no clear path forward.
I built drift – a real-time codebase health monitor that doesn’t just identify problems, it helps you fix them with AI. Think Datadog for your codebase, but with a GitHub Copilot-powered “fix it for me” button.
Three Ways drift Uses GitHub Copilot CLI
1. Interactive Fixing — drift fix calls the Copilot CLI to get AI refactoring advice
2. Custom Agent — @drift commands extend Copilot with code health expertise
3. CI Automation — GitHub Actions use Copilot CLI to generate friendly PR comments
drift watches your Go/TypeScript/Python/Rust/Java/Ruby/PHP/C# projects in real-time and provides:
- Cyclomatic complexity analysis with sparkline trends
- Dependency freshness tracking against package registries
- Architecture boundary violation detection
- Dead code identification
- AI-powered diagnostics and refactoring suggestions
As a developer who’s inherited way too much tech debt, I wanted a tool that goes beyond “you have problems” to “here’s how to solve them.” GitHub Copilot CLI made this possible – AI can now be your pair programming partner for refactoring, not just new code.
Demo
Quick Start
brew install copilot-cli # GitHub Copilot CLI
go install github.com/greatnessinabox/drift@latest
cd your-project
drift
Live TUI Dashboard
The dashboard shows:
- Health Score (0-100) with gradient color bar and sparkline trends from git history
- Complexity Panel — Functions sorted by cyclomatic complexity with severity indicators
- Dependencies Panel — Package freshness with days since latest release
- Boundaries Panel — Architecture violations (e.g., forbidden `api → db` imports)
- Activity Panel — Real-time file change notifications
Navigate with `Tab`, refresh with `r`, press `d` for AI diagnosis, `q` to quit.
Interactive AI-Powered Fixing
$ drift fix
🔍 Analyzing codebase... (Score: 78/100)
Found 3 issue(s) to fix:
1. [🔴 HIGH] model.Update() in app.go:126 (complexity: 25)
2. [🟡 MEDIUM] validateInput() in handler.go:89 (complexity: 18)
3. [🟢 LOW] parseConfig() in config.go:34 (complexity: 12)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[1/3] model.Update() in app.go:126 (complexity: 25)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🤖 Asking GitHub Copilot for suggestions...
[Copilot provides context-aware refactoring advice]
Apply this suggestion? [y/N/s(kip rest)]
What’s happening:
- drift analyzes your codebase and identifies complex functions
- For each issue, it extracts the function source code
- It builds a language-aware prompt: “Refactor this function to reduce complexity from 25 to <10…” with source code and metrics
- It calls `copilot -p` with `-s` and `--add-dir` to get AI suggestions in non-interactive mode
- You review and choose: accept, reject, or skip remaining suggestions
Real-World Test: Running drift fix on an OSS Codebase
I ran drift fix against sinew, my open-source React pattern library (150 files, 998 functions):
$ drift report
Health Score: 55/100
Top complexity offenders:
🔴 Playground() playground.tsx:128 (complexity: 48)
🔴 add() patterns.ts:89 (complexity: 47)
🔴 PatternPage() [id].tsx:45 (complexity: 45)
$ drift fix --limit 1
🔍 Analyzing codebase... (Score: 55.0/100)
Found 1 issue(s) to fix:
1. [🔴 HIGH] Playground() in playground.tsx:128 (complexity: 48)
🤖 Asking GitHub Copilot for suggestions...
Copilot CLI read the actual source file via `--add-dir`, analyzed the 48-complexity `Playground()` component, and returned a full refactoring plan:
- Extracted custom hooks (`useKeyboardNavigation`, `useStepHighlighting`) to reduce nested effects
- Map-based dispatch for keyboard handlers instead of if-else chains
- Component extraction (`OutputTabButton`, `StepsBadge`, `LogsBadge`) to eliminate JSX conditionals
- Helper functions (`getVisualizationComponent`, `handleExecutionSuccess`) to flatten switch statements
Estimated complexity reduction: 48 → ~12 — well below the threshold. This wasn’t generic advice — Copilot understood the React component structure, the custom hooks pattern, and TypeScript types.
Custom Agent with @drift Commands
Created a custom GitHub Copilot agent with 10+ specialized commands:
# Analyze specific directory
copilot --agent drift-dev "@drift analyze internal/analyzer/"
# Get refactoring suggestions for a function
copilot --agent drift-dev "@drift suggest-refactor complexFunction()"
# Explain what cyclomatic complexity means
copilot --agent drift-dev "@drift explain complexity"
# Compare health between commits
copilot --agent drift-dev "@drift compare HEAD~5 HEAD"
# Get language-specific best practices
copilot --agent drift-dev "@drift best-practices go"
The agent understands drift’s architecture and metrics, acting as a domain expert that can:
- Analyze code patterns and suggest improvements
- Explain why certain metrics matter
- Compare health across git history
- Debug analyzer issues
- Recommend best practices
GitHub Action for PR Health Checks
Created .github/workflows/drift-health.yml that runs on every PR:
- name: Run drift analysis
run: |
go install github.com/greatnessinabox/drift@latest
drift check --fail-under 70 > report.txt
- name: Generate Copilot-powered summary
run: |
copilot -p "Create a friendly PR comment summarizing this drift report: $(cat report.txt)" -s --no-auto-update
Instead of just failing CI with a red X, the Action uses Copilot to generate helpful comments:
“Hey! 👋 I noticed a few complex functions in this PR. The `handleRequest()` function (complexity: 23) might benefit from extracting the validation logic into a separate helper. The health score is currently 68/100, just under the 70 threshold. Want me to suggest a refactoring? 🤖”
Much friendlier than “COMPLEXITY CHECK FAILED ❌”
Try It Yourself
- Repo: https://github.com/greatnessinabox/drift
- Docs: README.md
- Custom Agent: drift-dev.agent.md
- GitHub Action: drift-health.yml
My Experience with GitHub Copilot CLI
The Journey: Three Integration Patterns
Phase 1: Discovery (Custom Agents)
I started by reading the Copilot CLI docs and discovered custom agents. Creating .github/agents/drift-dev.agent.md felt like defining an API – I specified commands, described drift’s domain, and explained the codebase structure.
The magic happened when I tried it:
copilot --agent drift-dev "explain complexity"
Suddenly, Copilot understood drift’s domain. It could explain cyclomatic complexity in the context of the project, suggest which files to refactor first based on drift’s metrics, and even compare health trends across commits.
This wasn’t generic AI – it was an expert assistant that “knew” drift.
Key learning: Custom agents transform Copilot from a general coding assistant into a specialized consultant for your project.
Phase 2: Deeper Integration (Interactive Fixing)
Next challenge: Could I build drift fix to call Copilot programmatically from within the tool itself?
Turns out, yes – via subprocess:
prompt := buildCopilotPrompt(cfg, issue) // includes source code + metrics
cmd := exec.Command("copilot",
"-p", prompt,
"-s", // silent: output only the AI response
"--add-dir", cfg.Root, // grant file access to the project
"--no-auto-update",
)
output, err := cmd.CombinedOutput()
The key insight: Prompt engineering matters.
Initial prompts gave generic advice:
❌ “Refactor this function to reduce complexity”
→ Generic response about extracting functions
Better prompts with context gave 10x better suggestions:
✅ “Refactor this Go function to reduce complexity from 25 to <10:
File: app.go:126-180
Function: model.Update()
Current issues: Nested if/else, multiple return paths
Suggest extract-method refactoring with descriptive function names that follow Go conventions.”
→ Specific, actionable refactoring with code examples
What I learned:
- The more context you give Copilot (source code, file paths, specific metrics, goals), the better it performs
- Structured prompts with clear sections (context, code, goal) work best
- Copilot excels at extract-method and extract-variable refactorings
- Including current complexity metrics helps it understand the severity
Phase 3: CI Automation (GitHub Actions)
The final piece: Using Copilot in CI pipelines to make automation friendlier.
I created a GitHub Action that:
- Runs drift analysis on every PR
- Captures the output (health score, issues, trends)
- Calls `copilot -p` to transform the technical report into a friendly comment
- Posts it to the PR
The transformation is incredible. Instead of:
Drift Health Check FAILED
Score: 68/100 (threshold: 70)
High complexity functions: 3
Stale dependencies: 2
Boundary violations: 1
PRs now get:
Hey! 👋 The health score is at 68/100, just under the 70 threshold.
I found a few opportunities to improve:
🔴 High Complexity:
- model.Update() in app.go:126 (complexity: 25)
Consider extracting the validation logic into a helper function
🟡 Stale Dependencies:
- axios is 180 days old (latest: 1.7.0)
Updating would include security fixes
Want me to suggest refactorings for these? Just comment "@drift fix" 🤖
Impact: Teams actually read and act on these comments vs ignoring red CI failures.
Real-World Development Impact
Velocity: Building drift took ~3 days with Copilot CLI vs my estimated week solo. The custom agent answered architecture questions instantly (no context-switching to docs), and Copilot CLI’s non-interactive mode (copilot -p) for implementation saved hours of Stack Overflow diving.
Quality: Copilot’s refactoring suggestions were genuinely good – not toy examples. I accepted ~70% without modification. The extract-method patterns it suggested were often cleaner than my usual approach, following proper Go/TypeScript conventions.
Learning: I discovered integration patterns I didn’t know existed:
- Custom agents can have domain-specific command vocabularies
- Subprocess calls enable programmatic AI usage in any language
- Copilot in CI makes automation feel less robotic and more helpful
- Prompt engineering is more important than I thought
Surprises (Good and Bad)
What Exceeded Expectations:
- Copilot’s refactoring suggestions were production-ready, not academic examples
- The custom agent “remembered” drift’s architecture across conversations – it understood the relationship between analyzers, the health scorer, and the TUI
- Prompt engineering was intuitive once I started including code context and specific goals
- Error messages from Copilot CLI were helpful when I messed up prompts
Challenges:
- Error handling: When Copilot CLI isn’t installed, drift needed graceful fallbacks (now shows helpful install message)
- Prompt length limits: Very long functions (500+ lines) had to be truncated or extracted into focused prompts
- Interactive terminal UX: Capturing Copilot’s output and presenting it nicely in a TUI required careful stream handling
- Rate limiting: Rapid-fire suggestions could hit API limits (added throttling and caching)
Advice for Builders Using Copilot CLI
If you’re building Copilot CLI integrations, here’s what I learned:
- Start with a custom agent — It’s the easiest way to extend Copilot with domain knowledge. Just create `.github/agents/your-agent.agent.md` and define commands.
- Context is king — The more context you provide in prompts (code, metrics, goals, constraints), the better suggestions you get. Don’t just say “refactor this” – explain why, what metrics are concerning, and what good looks like.
- Non-interactive mode is powerful — The standalone Copilot CLI’s `-p` flag (prompt) and `-s` flag (silent output) make programmatic integration seamless. Just call `copilot -p "..." -s` from your tool via `exec.Command()` (Go), `subprocess.run()` (Python), or equivalent.
- User agency matters — Let users review AI suggestions before applying them. The “accept/reject/skip” pattern in `drift fix` respects developer judgment. Trust, but verify.
- Handle missing dependencies — Not everyone has Copilot CLI installed. Detect this early (`exec.LookPath("copilot")`) and show a friendly installation guide.
- Structured prompts > prose — Format prompts with clear sections (Context, Code, Goal, Constraints). Copilot parses structure better than long paragraphs.
- Test edge cases — What if the function has no source code available? What if Copilot returns unexpected output? What if the user cancels mid-process? Handle these gracefully.
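The “handle missing dependencies” advice can be as small as a guard at startup. A minimal sketch (the hint wording is mine, not drift's):

```go
package main

import (
	"fmt"
	"os/exec"
)

// copilotHint returns a friendly message shown when the Copilot CLI binary
// is missing; the exact wording here is illustrative.
func copilotHint() string {
	return "GitHub Copilot CLI not found.\n" +
		"AI-powered suggestions are disabled. To enable them, install the\n" +
		"Copilot CLI and make sure `copilot` is on your PATH."
}

func main() {
	// Detect the binary once, before ever shelling out to it.
	if _, err := exec.LookPath("copilot"); err != nil {
		fmt.Println(copilotHint())
		return // fall back to metrics-only mode instead of crashing
	}
	fmt.Println("copilot found; AI features enabled")
}
```

Doing this check once at startup keeps every later `exec.Command("copilot", ...)` call on a path that is known to work.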
What Worked Really Well
The three-layer integration (custom agent + interactive fixing + CI automation) created a coherent AI-enhanced experience:
- Development time: Custom agent answers questions and explains decisions
- Refactoring time: `drift fix` provides on-demand suggestions
- Review time: GitHub Action makes PR reviews more actionable
Each layer complements the others. The custom agent teaches developers about code health concepts, drift fix applies that knowledge to their code, and the GitHub Action ensures quality gates are friendly, not frustrating.
What’s Next for drift
I’m excited to expand the Copilot integration:
- Automatic patch application: Generate and apply refactoring patches directly (with approval)
- Real-time suggestions in the TUI: Show Copilot suggestions as you edit files in the live dashboard
- Team dashboards: Aggregate health across repositories with Copilot-generated insights
- Multi-AI support: While built for Copilot CLI, drift also works with Claude Code, Cursor, Aider, and other AI assistants (see AI_AGENTS.md)
Try It
If you maintain a Go/TypeScript/Python/Rust/Java/Ruby/PHP/C# codebase, give drift a spin:
brew install copilot-cli # Copilot CLI
go install github.com/greatnessinabox/drift/cmd/drift@latest # drift
cd your-project
drift fix
I’d love feedback on the Copilot integration patterns – what works, what doesn’t, what would make it more useful?
Closing Thoughts
GitHub Copilot CLI transformed drift from a monitoring tool into an AI-powered improvement engine. The three integration patterns (custom agent, interactive fixing, CI automation) demonstrate how AI can enhance every stage of development:
- Local iteration: Custom agent as expert consultant
- Focused refactoring: Interactive suggestions with approval workflow
- Code review: Friendly, actionable PR comments
The future of developer tools isn’t just identifying problems – it’s helping solve them with AI assistance. With Copilot CLI, we can build that future today.
The best part? These patterns are reusable. Any CLI tool can:
- Create a custom agent (`.github/agents/`) with domain-specific commands
- Call `copilot -p` programmatically for AI-powered features
- Use Copilot CLI in CI to make automation more human-friendly
Thanks for reading! 🚀 Questions? Find me in the comments or open an issue on the drift repo.
Repository: https://github.com/greatnessinabox/drift
Custom Agent: drift-dev.agent.md
Demo GIFs: Quick | TUI | Dashboard
