How To Teach AI Habits — And Stop Repeating Yourself

Published January 28, 2026 on matbanik.info

Let me tell you about the worst part of my workday.

It’s not the hard problems. Hard problems are actually kind of fun — they’re puzzles, and puzzles have solutions. No, the worst part is something much more mundane.

It’s Tuesday afternoon. I’ve got 30 tabs open. Three different documents all titled “final.” A dozen notes scattered across apps. And somewhere in that mess — I know this for certain — is the one sentence that would unlock everything. The insight I had yesterday. The decision I already made.

But I can’t find it.

So I start over. I re-explain the context. I re-derive the conclusion. I waste an hour getting back to where I already was.

Sound familiar?

[Image: Pixel art illustration of a person working at a desk with an AI companion. Caption: Your AI Hero Assistant]

The Day Everything Changed

Here’s what I eventually figured out: my bottleneck was never ideas. It was never intelligence, or speed, or even time. It was state. I could see a complex system quickly — and then lose the entire thread the moment I switched tasks.

And no amount of willpower was going to fix that. I’d tried harder notebooks. Better apps. More discipline. None of it stuck.

What finally worked was building something I’d never thought to build before: an external memory I could actually trust. Not just notes — but durable artifacts. Files. Checklists. Workflows. Things an AI could help me create, refine, and retrieve on demand.

This post is about how that works. Not the tools — tools come and go. But the principles. The seven ideas that show up again and again when you watch people who’ve figured this out.

But first, let’s talk about why most people never get there.

Why Chat AI Hits a Ceiling

Here’s the thing about ChatGPT, Claude, Gemini, or any of the AI assistants you might use: they’re incredible. Genuinely. The jump from “no AI” to “some AI” is life-changing.

But at some point, you hit a wall.

You ask a question. You get an answer. Maybe you ask a follow-up, get another answer. And then you move on. The conversation disappears. The insights evaporate. Next week, when you face the same problem, you start completely fresh.

That’s chat AI. It’s reactive. It’s stateless by default. It doesn’t remember, and it doesn’t learn — at least, not across sessions.

Agentic AI is different.

Not because the model is smarter, but because the system around it is different. There’s a loop. Plan something. Use tools to execute it. Verify the results. Save what you learned so next time is easier.

If chat AI is a brilliant stranger you meet at a party — helpful for the evening, then gone forever — agentic AI is a collaborator who takes notes, remembers your preferences, and shows up tomorrow with a plan.

The difference isn’t magic. It’s architecture. And you can build it yourself.
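To make that loop concrete, here is a minimal sketch in Python. Everything in it is a stand-in: the callables, the notes.md file, even the idea that a step can be checked with a single function. It shows the shape of plan, execute, verify, save, not any particular framework.

```python
# A minimal sketch of the agentic loop: plan, execute, verify, record.
# All names here (plan_steps, execute, verify, notes.md) are hypothetical stand-ins.
from pathlib import Path

def run_loop(goal, plan_steps, execute, verify, memory=Path("notes.md")):
    """Run each planned step, accept it only if verification passes,
    and append what was learned to a file that outlives the session."""
    for step in plan_steps(goal):              # 1. plan something
        result = execute(step)                 # 2. use tools to execute it
        if not verify(step, result):           # 3. verify the results
            print(f"Step failed verification, stopping: {step}")
            break
        # 4. save what you learned so next time is easier
        with memory.open("a", encoding="utf-8") as f:
            f.write(f"- {step}: {result}\n")
```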

[Image: Antigravity file explorer showing the .agent folder with workflows and documentation. Caption: The .agent folder — where AI workflows live]

Seven Principles That Actually Work

I spent months reading everything I could find from people who use AI seriously. Not just for demos — for real work. Shipping code. Writing reports. Managing projects.

And something strange happened: the same ideas kept showing up. Different words, different contexts, but the same core principles. They cluster around three things we’re all trying to protect: our time, our energy, and our sense of purpose.

Let me walk you through them.

The Time Principles

Start with a plan.

This is the single biggest lesson, and also the least exciting to hear: before you ask the AI to build anything, write down what you want. A spec. A brief. A one-paragraph description of “done.”

I know. It sounds like extra work. It feels like bureaucracy.

But here’s what happens without it: you make a vague request, the AI gives you something vaguely right, and you spend the next hour fixing edge cases you never mentioned. The “fast” approach becomes slow. The rabbit holes multiply.

With a clear spec, something different happens. The AI has constraints. It can’t wander. The search space collapses to something manageable. And suddenly, first drafts are actually good.

There’s a phrase that stuck with me: “Work slower upfront to move faster overall.” It sounds paradoxical, but anyone who’s lived the alternative knows it’s true.
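What counts as “a spec” here can be embarrassingly small. Something like the file below is enough; the feature and the headings are made up, and you should shape yours however you like.

```markdown
<!-- spec.md: illustrative example, not a required format -->
# Spec: export report as CSV
Goal: users can download the current report as a CSV file.
Done means:
- A "Download CSV" button appears on the report page.
- The file contains the same rows and columns as the on-screen table.
- An empty report downloads headers only, not an error.
Out of scope: Excel formatting, scheduled exports.
```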

Verify before you trust.

The AI is confident. Always. It will tell you the code works. It will assure you the logic is sound. And sometimes — often, even — it’s right.

But sometimes it’s not.

This isn’t a criticism. It’s physics. Language models generate plausible text. That’s what they do. And plausible text can be wrong in ways that are hard to spot.

So the rule becomes: don’t accept “it’s done.” Demand proof. Tests that pass. Diffs you can read. Logs that show execution. Treat AI output like you’d treat a junior developer’s first attempt — optimistic, but requiring review.

The operators who move fastest aren’t the ones who trust blindly. They’re the ones who’ve built verification into the loop.
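One way to make “demand proof” automatic is to hide the proof behind a single command you run before accepting anything. A sketch, assuming the proof is a pytest run; swap in whatever test, build, or lint command your project actually uses.

```python
# verify.py: refuse to call a step "done" until the proof command passes.
# "pytest -q" is an assumption; substitute your own test, build, or lint command.
import subprocess
import sys

def prove(cmd):
    """Run the proof command and return True only if it exits cleanly."""
    return subprocess.run(cmd).returncode == 0

if __name__ == "__main__":
    ok = prove(["pytest", "-q"])
    print("verified" if ok else "not verified: don't accept 'it's done' yet")
    sys.exit(0 if ok else 1)
```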

Use git like your life depends on it.

When you can undo any mistake instantly, you can take risks. When mistakes are reversible, speed becomes safe.

This is why the best AI operators commit constantly. Not at the end of a feature — at the end of every small step. Every atomic change. Every checkpoint.

Because here’s the truth: when the AI goes sideways (and it will), you don’t want to debug three hours of tangled changes. You want to reset to five minutes ago and try again.

git reset is faster than fixing. Every time.
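In practice, the habit is just a handful of commands repeated all day. The commit message below is illustrative, not a rule.

```bash
# After every small, verified step: checkpoint it.
git add -A
git commit -m "checkpoint: extract csv formatting into helper"

# When the AI goes sideways, throw away the uncommitted mess...
git reset --hard HEAD
# ...or step back one whole checkpoint and try again.
git reset --hard HEAD~1
```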

The Energy Principles

Break everything into atoms.

Large tasks break AI. I don’t mean they make it slower; I mean they make it wrong. The longer and more complex the request, the more likely the model is to drift, to lose the thread, to compound small errors into large ones.

The solution is decomposition. Take that big task and shatter it into pieces small enough that you could do them yourself in 15 or 20 minutes. Steps so clear that “done” is obvious.

Here’s a test: if you can’t tell whether a step is complete, it’s too big. Break it down further.

This feels tedious at first. But the payoff is enormous. Each small step succeeds. Each small success builds on the last. And suddenly, the impossible project becomes a series of tractable problems.
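For the hypothetical CSV-export feature from earlier, a breakdown into atoms might look like this. Each step is a few minutes of work, and each one has a yes-or-no answer to “is it done?”

```markdown
<!-- illustrative task list; the feature is made up -->
- [ ] Add a "Download CSV" button that does nothing yet (done: button renders)
- [ ] Write a function that turns report rows into CSV text (done: unit test passes)
- [ ] Wire the button to download that text (done: clicking saves a file)
- [ ] Handle the empty-report case (done: file has headers only, no error)
```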

Externalize your context.

Here’s the thing nobody tells you about long AI sessions: they decay.

At the start, the AI remembers everything. The goal. The constraints. The decisions you made along the way. But as the conversation grows, the context window fills up. Old information gets pushed out. The AI starts forgetting.

And you don’t notice — not at first. The responses still sound confident. But they’re drifting. Slowly, subtly, the AI loses the plot. And you spend more and more energy re-explaining things you’ve already covered.

The fix is counterintuitive: stop relying on the conversation to remember things. Instead, externalize your context. Keep a file — call it activeContext.md or project-state.md or whatever you like — that captures the current goal, the key decisions, the open questions. Feed it to the AI at the start of each session.

In my experience, this single practice — maintaining a living state file — can dramatically reduce agent drift. The AI doesn’t forget, because you’re not asking it to remember.
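Mine is nothing fancier than a short markdown file. The headings below are just one possible shape, and the project details are invented; what matters is that the goal, the decisions, and the open questions live somewhere the next session can read.

```markdown
<!-- activeContext.md: pasted in (or read by the agent) at the start of each session -->
## Current goal
Ship the CSV export; everything else on the report page is frozen until then.

## Key decisions (do not relitigate)
- Exports are generated server-side, not in the browser.
- Column order matches the on-screen table exactly.

## Open questions
- Do we need a row limit for very large reports?

## Next step
Wire the download button to the new endpoint.
```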

[Image: MCP Servers configuration panel showing connected external tools. Caption: MCP servers extend what your AI can do — and remember]

The Purpose Principle

Separate the roles.

There’s a bargain at the heart of working with AI: you think, it executes. You make the strategic decisions — what to build, why it matters, what good looks like. The AI handles the tactical work — syntax, boilerplate, the tedious bits that drain your attention.

When this works, it’s beautiful. You stay in the creative, strategic zone. The frustrating, repetitive work disappears. Flow becomes possible again.

When it breaks down — when you start doing the AI’s job for it, or stop understanding what it’s producing — something gets lost. Not just efficiency, but capability. There’s a real risk of “cognitive atrophy” — forgetting how to do the things you’ve outsourced.

The solution is clarity. Know what’s yours. Know what’s the AI’s. Protect your role as the architect, and let the AI be the builder.

The Memory That Makes It Stick

You might have noticed a pattern in these principles: they all depend on persistence. Plans that survive sessions. Context that doesn’t decay. Decisions that stay made.

But AI is stateless. Every time you start a new conversation, it forgets everything.

So where does the persistence come from?

From you. From the files you maintain, the workflows you build, the documentation you create. This is what I mean by an “external memory” — a system of artifacts that lives outside the AI, that the AI can read and write to, but that persists independently.

Think of it as a hippocampus for your projects. The long-term memory that the AI doesn’t have.

There’s a protocol for this now — an open standard called MCP, the Model Context Protocol. It lets AI systems connect to external tools and data sources. Database queries. Web searches. File operations. All the things an AI can’t do in a chat window, but suddenly can when it’s connected to the right servers.
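Wiring a server in is mostly configuration. The exact file name and location vary by client, so treat this as the general shape rather than a recipe; the example points the common mcpServers config at the official filesystem server, with a made-up project path.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"]
    }
  }
}
```

Once a client loads something like that, the model can list, read, and write files in that directory through the protocol instead of guessing at them.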

And this is where things get interesting.

A Tool I Built to Solve This

I ran into the same problems over and over.

Context disappearing between sessions. Important decisions getting lost in chat history. Research that burned tokens just to parse HTML. JSON files that changed in ways I couldn’t track.

So I built something to fix it.

It’s called Pomera, and it’s an MCP server that works with any IDE that supports the protocol — Cursor, VS Code with Cline, Claude Desktop, Antigravity, and others. Think of it as a toolkit for the things AI struggles with.

Need to save a file before a risky refactor? One command creates a backup. Want to search across all your notes from every session? There’s a full-text search for that. Comparing two API responses? It does semantic diffs — showing you what actually changed in the data, not just which lines moved.

There’s web search built in. URL reading that strips out the junk. Two dozen text operations that would otherwise burn tokens — extracting URLs from a page, cleaning up whitespace, normalizing formats.

It even auto-detects sensitive information — API keys, passwords, tokens — and encrypts them at rest without you asking.

I won’t pretend this is the only solution. But it’s the one I use every day, and it solves problems I couldn’t find good answers for elsewhere. If any of this resonates, the code is open source on GitHub.

The Meta Layer

So far, we’ve talked about optimizing your workflow — the principles and tools that make each session better.

But there’s a layer above that. A habit that separates good operators from great ones.

It’s this: review yourself.

Not just the AI’s output — your own process. At the end of a session, ask: what worked? What didn’t? Where did I give confusing instructions? Where did the AI waste effort because I wasn’t clear?

This is meta-cognition. Thinking about thinking. And it’s the fastest way to improve, because every session becomes data.

Try this: after your next significant work session, ask the AI to analyze your prompts. What patterns does it see? What would it change? You’ll learn something. Every time.

There’s another version of this, too. When the AI makes a mistake — uses a deprecated method, hallucinates a library function, ignores a constraint — don’t just fix the code. Update your documentation. Add the correct method to your patterns file. Add the constraint to your rules.

The mistake becomes impossible to repeat. Not because the AI learned, but because your system did.
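Concretely, that update can be a one-line addition to whatever rules or patterns file your tool reads at the start of a session. The entries below are invented, but they are the shape of thing I mean.

```markdown
<!-- rules.md: corrections the AI must not repeat (example entries) -->
- Never call `requests.get()` without a timeout; always pass `timeout=10`.
- There is no `/v1/export` endpoint in our API; exports go through `/v1/reports/{id}/csv`.
```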

[Image: Chat history with export option for reviewing past sessions. Caption: Always Export — good sessions are worth revisiting]

Where to Go From Here

If you’re curious to explore deeper, there are some excellent resources out there.

The official Getting Started with Antigravity guide walks through the basics. LogRocket published a comprehensive developer’s guide that goes deeper on the agentic capabilities. And YouTube has dozens of tutorials — including this hands-on demo that shows the workflow in action.

I’m not going to give you step-by-step instructions here. Partly because those resources already exist, and they’re good. But mostly because exploring is more valuable than following. Open the MCP panel. Poke around the settings. See what’s possible.

That’s how you learn what works for you.

[Image: Pixel art adventurer at a crossroads representing exploration of new possibilities. Caption: There’s more to discover than any one post can cover]

Why This Matters Beyond Technology

But there’s something else I want to leave you with. Something that took me a while to see.

These principles aren’t really about AI at all. They’re about being human.

A few years ago, I read Jordan Peterson’s 12 Rules for Life. It’s a strange book — part psychology, part philosophy, part mythology — and it stuck with me in ways I didn’t expect. When I started noticing patterns in how effective AI operators work, I realized something unsettling: the principles are the same. Different vocabulary, same truths.

Let me show you what I mean.

Peterson’s sixth rule is “Set your house in perfect order before you criticize the world.” The idea is simple but profound: before you try to fix the chaos out there, fix the chaos within. Clean your room. Organize your life. Get your own affairs in order. Only then do you have the standing — and the clarity — to address larger problems.

Now think about the first principle we discussed: start with a plan. Before you ask the AI to build something, organize your own thinking. Write down what you want. Get your context in order. The parallel is exact. The AI can’t fix a problem you haven’t defined. And you can’t define a problem while your own thinking is scattered across 30 tabs.

Peterson’s tenth rule is “Be precise in your speech.” He argues that vague language creates vague outcomes — that fuzzy thinking propagates through our words and corrupts our actions. Precision isn’t pedantry. It’s the difference between problems that get solved and problems that fester.

This is exactly why decomposition works. Break tasks into atoms. Make them so clear that “done” is obvious. The precision of your language — your prompts, your specs, your definitions — determines whether the AI delivers something useful or wanders into the weeds.

Then there’s rule eight: “Tell the truth — or, at least, don’t lie.” Peterson frames this as an ethical imperative, but it’s also practical. Lies compound. They require more lies to sustain. Eventually, the structure collapses.

AI hallucinations are lies of a different kind — not intentional, but no less dangerous. The model tells you something with confidence. You accept it without verification. The lie propagates into your codebase, your report, your decision. The verification imperative isn’t just efficiency. It’s epistemology. It’s the commitment to building on truth, not on plausible fiction.

[Image: Visual representation of order overcoming chaos. Caption: Order defeats chaos — in code and in life]

Rule seven: “Pursue what is meaningful, not what is expedient.” The expedient thing is to skip the planning phase. To prompt quickly and hope for the best. To accept the first answer and move on. But expedience has a cost — the rework, the drift, the wasted energy of doing things twice.

Meaningful work requires patience. It asks you to invest upfront. To build systems that persist. To trade the quick hit for the lasting structure. Every principle in this post is, in some way, a choice of meaning over expedience.

And finally, rule four: “Compare yourself to who you were yesterday, not to who someone else is today.” This is the heart of meta-cognition. The practice of reviewing your prompts, your process, your patterns — and getting better. Not better than someone else. Better than yesterday’s version of you.

Here’s what I’ve come to believe: the principles that make AI work are the principles that make life work. Order defeats chaos. Truth defeats confusion. Precision defeats vagueness. Meaning defeats nihilism.

AI is just the medium. The message is as old as civilization.

The Takeaway

Here’s what I want you to remember:

The leap from “chat AI” to “agentic AI” isn’t about better prompts. It’s about building systems that persist. Plans that survive sessions. Context that doesn’t decay. Verification that catches errors. An external memory you can trust.

This sounds like more work. And at first, it is.

But then something shifts. The cognitive overhead drops. You stop re-explaining. You stop losing context. You stop rebuilding what you’ve already built.

And you start spending your energy on the work that actually matters.

There’s a deeper lesson here, too. The principles that make AI effective are the same principles that make you effective — as a thinker, as a creator, as a human being. Order. Truth. Precision. Meaning. These aren’t tech concepts. They’re the foundations of a life well-lived.

AI didn’t invent these ideas. It just made them visible again.

So yes, build your workflows. Create your memory systems. Teach your AI habits that stick.

But don’t forget: you’re also teaching yourself.

I’m curious: What’s one repetitive task you’d love to never do manually again — and what’s stopped you from automating it?


Originally published on matbanik.info. Cross-posted with ❤️ to Dev.to.
