For the past few months, I’ve been obsessed with a single question: Why do our AI assistants live in the cloud, while our work lives on our machines?
Most AI coding assistants today are great, but they feel like “visitors”: they have only limited context about your system, they require expensive subscriptions, and they often struggle with long-running tasks that demand real autonomous execution.
That’s why I built Tars.
What is Tars?
Tars is an autonomous, local-first AI assistant powered by Google’s Gemini models. Unlike a standard CLI wrapper, Tars uses a Supervisor-Orchestrator model. It lives in your terminal, but it acts like a background service that can manage its own memory, schedule its own tasks, and even heal itself if things go wrong.
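To make the Supervisor-Orchestrator idea concrete, here is a minimal sketch of the pattern in Python. This is illustrative only (the class and function names are hypothetical, not Tars’s actual API): a supervisor runs a worker task and, if the worker crashes, restarts it up to a retry budget, which is the basic shape of the “heal itself” behavior described above.

```python
import time
import traceback

class Supervisor:
    """Run a worker task, restarting it on failure (self-healing sketch)."""

    def __init__(self, task, max_restarts=3):
        self.task = task
        self.max_restarts = max_restarts
        self.restarts = 0

    def run(self):
        while True:
            try:
                return self.task()  # worker finished cleanly
            except Exception:
                self.restarts += 1
                if self.restarts > self.max_restarts:
                    raise  # retry budget exhausted; surface the failure
                traceback.print_exc()
                time.sleep(0.1)  # brief back-off before restarting

# Example: a flaky worker that fails twice, then succeeds.
attempts = {"n": 0}

def flaky_task():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = Supervisor(flaky_task).run()
print(result)  # done
```

The key design choice is that the supervisor owns the retry policy, so the worker itself stays simple and stateless.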
Core Features:
- Autonomous Persistence: Tars has a background “Heartbeat” service. You can give it a complex task (like refactoring a library or monitoring a server), and it will work on it autonomously until it’s done—sending you a notification via Discord or WhatsApp when it needs you.
- Tiered Memory System: Tars maintains a `facts.json` for long-term preferences and project-specific `GEMINI.md` files. It doesn’t just “chat”; it remembers your architecture decisions and personal habits.
- Multi-Channel Interface: You can interact with your local Tars instance from anywhere via Discord or WhatsApp. It’s like having your workstation’s brain in your pocket.
- Extensible via MCP: Tars can write its own tools and extensions using the Model Context Protocol. It already has deep integrations with Google Workspace, Playwright (for browser use), and health data (Ultrahuman).
- Free Inference: Because it uses the Gemini API, most developers can run Tars entirely within the free tier, avoiding the $20/mo “AI Tax.”
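The “Heartbeat” service above can be sketched as a small background loop. Again, this is a hedged illustration with hypothetical names (not Tars’s real internals): a thread periodically polls a task’s status and fires a notification callback, standing in for the Discord/WhatsApp ping, when the task finishes or needs input.

```python
import threading
import time

def heartbeat(check, notify, interval=0.05):
    """Poll a task's status until it completes, notifying on key events."""
    while True:
        status = check()
        if status in ("needs_input", "done"):
            notify(status)  # stand-in for a Discord/WhatsApp message
            if status == "done":
                return
        time.sleep(interval)

# Example: a task that reports "done" on its third poll.
polls = {"n": 0}
events = []

def check():
    polls["n"] += 1
    return "done" if polls["n"] >= 3 else "running"

t = threading.Thread(target=heartbeat, args=(check, events.append))
t.start()
t.join()
print(events)  # ['done']
```

Running the poller on its own thread is what lets the assistant keep working in the background while your terminal stays free.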
Why “Local-First”?
I believe the future of AI isn’t just bigger models, but better infrastructure. Tars prioritizes local execution, local configuration, and local data storage. Your memories and tasks are yours.
Join the Journey
Tars is currently in active development (v1.11.1). I’m building it in public and would love to hear from other devs who are interested in:
- Autonomous Agents: How to build reliable supervisor models.
- Local Intelligence: Reducing reliance on SaaS for core dev workflows.
- Terminal UX: Making the CLI feel like a high-performance workspace.
Check it out here: https://tars.saccolabs.com (or hit me up in the comments!)
What are you building in the autonomous agent space? I’d love to swap notes.