I’ve been a software developer for many years. Long enough to see waves of innovation come and go—new frameworks, languages, paradigms, all claiming to be game changers. Some were. Many weren’t. But each one taught me something new and helped shape the way I think, solve problems, and build systems. That’s why I’ve been watching the current wave of AI tools—especially generative AI—with a mix of curiosity and caution.
Do I use AI in my day-to-day work? Occasionally, yes. Tools like GitHub Copilot can speed up some parts of coding. They can help scaffold a function or remind me of a syntax detail I forgot. They’re useful in the same way autocomplete or Stack Overflow is useful: a reference, a boost, a convenience. But I don’t let these tools do the thinking for me. I prefer to keep my cognitive abilities active and sharp.
This isn’t about being a purist. It’s about being conscious of how I think and solve problems. Writing software isn’t just typing out code. It’s a deep mental process that involves logic, patience, understanding, iteration, and sometimes even frustration. You can’t replace that process entirely with a tool—not without losing something important.
So when I hear high-profile tech leaders suggesting that we should stop programming altogether—or that programming will soon be obsolete—I can’t help but feel unsettled. It raises a lot of questions. What are we actually trying to optimize for? Convenience? Speed? Automation for its own sake?
Programming, as I understand it, is more than just giving machines instructions. It’s about understanding problems deeply, breaking them into smaller parts, and reasoning about how systems behave under different conditions. It takes math skills, critical thinking, creative problem-solving, and an incredible amount of patience. You don’t develop those qualities by clicking buttons or feeding prompts into a chatbot. You develop them by doing the work—writing code, debugging it, building things that fail and trying again.
Don’t get me wrong: AI is impressive. We’re living in an era where, for the first time, we can “program” machines using natural language. That’s a milestone. Being able to ask an AI to summarize logs, explain code, or generate a quick proof-of-concept is undeniably helpful. I use it for those tasks myself. It saves time, helps with pattern recognition, and sometimes even suggests better ways to approach a problem.
But relying too heavily on it concerns me. Because the more we lean on AI to write code, the less we practice the thinking behind it. And if we stop thinking critically about the systems we build, we risk losing sight of how they actually work. That’s not just a philosophical concern—it’s a practical one. Bugs, security issues, scalability problems… they don’t go away just because code was written by an AI. In fact, they can become harder to spot if we don’t fully understand what the AI is doing.
There’s another angle to this that I think about often: responsibility. When you write code yourself, you take responsibility for what it does. You debug it, you test it, you maintain it. When a machine writes code for you, who’s responsible when something goes wrong? Who ensures the logic is sound, the ethics are considered, the edge cases are covered? These questions don’t have clear answers yet, and I think it’s dangerous to pretend they don’t matter.
At the same time, I won’t deny that some aspects of this new era truly fascinate me. The idea that I can run open-source models like Mistral or Llama at home, plug them into frameworks like CrewAI, and set them up to assist with parts of my workload—that’s powerful. I’ve experimented with it myself. It’s not about replacing my thinking but augmenting it. Having a local AI agent help with code analysis, system design, or even documentation—that’s genuinely exciting. It gives me new tools to do my job better.
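To make that concrete, here’s roughly what such a setup looks like. Treat this as a minimal sketch, assuming a recent CrewAI release and a Llama model served locally through Ollama; the model name, endpoint, and task wording are illustrative, and exact parameter names may vary between versions.

```python
# A minimal sketch: wiring a locally served Llama model into a CrewAI agent.
# Assumes Ollama is running on its default port with a Llama model pulled;
# adjust the model name and endpoint for your own setup.
from crewai import Agent, Task, Crew, LLM

# Point CrewAI at the local model instead of a hosted API.
local_llm = LLM(
    model="ollama/llama3",              # model name as registered in Ollama
    base_url="http://localhost:11434",  # Ollama's default endpoint
)

# An agent that assists with code review rather than replacing it.
reviewer = Agent(
    role="Code analysis assistant",
    goal="Point out risky patterns and unclear logic in submitted code",
    backstory="A careful assistant that flags issues for a human to judge.",
    llm=local_llm,
)

review_task = Task(
    description="Review the provided diff and list potential bugs or edge cases.",
    expected_output="A list of concerns, each with a short rationale.",
    agent=reviewer,
)

# Assemble the crew and run it; the output goes to me, not to production.
crew = Crew(agents=[reviewer], tasks=[review_task])
result = crew.kickoff()
print(result)
```

The design choice that matters to me is in the role and the goal: the agent surfaces concerns for me to weigh; it doesn’t decide anything on its own. That’s augmentation, not replacement.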
But even as I explore these possibilities, I remain grounded. I know what it took to become a developer. I know the long nights spent learning, failing, and trying again. I’ve been through projects that collapsed under technical debt, and I’ve seen how a well-thought-out system can survive years of change. That kind of knowledge isn’t something you can download or prompt into existence. It’s earned over time.
I worry that we’re starting to treat software development as a commodity instead of a craft. That we’re chasing speed over understanding, automation over clarity. I worry that newer developers might be encouraged to rely on AI before they’ve built their own mental models of how things work. That’s a dangerous path because it creates a dependency—not just on tools, but on outcomes we don’t fully control or understand.
There’s also a larger societal implication. If we start telling people to stop learning how to code, we’re effectively gatekeeping one of the most powerful skills of the digital age. We’re saying, “Let the machine do it,” without asking what happens when that machine is wrong, biased, or unavailable. Teaching people how to code is not just about building apps—it’s about teaching them how to think, how to structure ideas, how to solve problems in a world increasingly run by software.
So no, I’m not planning to stop programming anytime soon. I will keep writing code by hand, thinking deeply about the systems I build, and using AI tools where they make sense—but not as a crutch. I’ll continue training my mind the way an athlete trains their body, because this work matters to me. Not just for the sake of the code, but for the discipline it builds and the thinking it demands.
This is not about resisting change or fearing progress. It’s about being deliberate in how we adapt. I welcome the advances AI brings, especially when it helps us automate the boring stuff, parse large volumes of data, or test ideas faster. But I also believe we need to stay connected to the core of our craft. We need to remain curious, analytical, and—most importantly—accountable.
There’s still a place for human programmers in this new world. In fact, I think we’re more important than ever. Because as the tools evolve, someone needs to ask the hard questions, make the ethical decisions, and understand the trade-offs. Someone needs to ensure we’re not just building fast, but building well.
If you’re a developer wondering where you stand in all this, I’d say this: don’t rush to keep up with every trend. Stay curious. Stay sharp. Don’t give away your thinking to tools that haven’t earned your trust. Learn how the systems work. Build your intuition. And when you use AI, use it like you’d use a smart assistant—not a replacement for your brain.
We’re at a unique point in history. There’s opportunity here, and also responsibility. Let’s not forget what it means to build software with care, with intention, and with a mind fully engaged.