Foreboding AI, One Year Later: What Are We Really Building?

A year ago, I wrote Foreboding AI: The Inevitable Collapse We’re Funding Ourselves

At the time, my concern was that we were paying for the very systems that could eventually replace us. We were subscribing to tools that learnt from our prompts, our corrections, our habits, our work, and our impatience.

A year later, I still think that concern was valid.

But my worry has changed shape.

It is no longer only about whether AI can answer better, write better, summarise better, draw better, or code better.

The deeper question now is this:

What happens when AI stops simply answering and starts acting?

For years, most people understood AI as a chatbot.

You asked a question.

  • It gave an answer.

  • Maybe it helped write an email.

  • Maybe it explained a piece of code.

  • Maybe it generated an image.

That felt powerful, but still contained.

Now we are moving into something different.

AI systems can now:

  • browse websites,

  • read files,

  • inspect codebases,

  • run terminal commands,

  • call APIs,

  • edit code repositories,

  • use external tools,

  • remember context,

  • and work through multi-step tasks.

That is not just a better chatbot.

That is a system with hands.

And once AI has hands, the questions become much harder:

  • What can it read?

  • What can it change?

  • What can it delete?

  • What can it install?

  • What secrets can it see?

  • What systems can it touch?

  • Who is watching it?

  • Who is responsible when it gets something wrong?

And maybe the hardest question of all:

Are we giving AI responsibility before we have built enough human restraint around it?

From Helper to Worker

One of the biggest changes I have observed over the last year is how quickly AI development tools have shifted from assistance to delegation.

Not long ago, AI coding tools were mostly autocomplete and chat-in-the-editor helpers. They could suggest a function, explain a bug, or produce a snippet.

Useful? Absolutely.

But the human still clearly owned the work.

Now we have tools that can:

  • read a codebase,

  • interpret an issue,

  • make a plan,

  • edit files,

  • run tests,

  • inspect errors,

  • try again,

  • and sometimes open a pull request.

That changes the relationship.

The developer is no longer only asking for help.

The developer is assigning work.

That sounds exciting, and in many ways it is. I use AI. I build with it. I research it. I am not looking at this from the outside as someone who hates technology.

In fact, the kind of technology I have always cared about most is assistive technology: tools that simplify life, remove friction, bridge gaps, and help people accomplish things they otherwise could not.

Good technology should make hard things feel possible, especially for people who do not think of themselves as technical.

That is the version of AI I want to believe in:

  • AI that assists.

  • AI that teaches.

  • AI that explains.

  • AI that helps a small business do more.

  • AI that helps a non-technical person solve a real problem.

  • AI that gives someone with limited time, money, confidence, or access a way forward.

That kind of AI is worth building.

But assistive is not the same as autonomous.

A tool that helps a person see more clearly is different from a tool that makes the decision for them.

A tool that helps a junior developer understand a bug is different from a tool that fixes it while the junior learns nothing.

A tool that helps a business owner draft a policy is different from a tool that quietly makes operational decisions nobody reviews.

A tool that helps a developer move faster is different from a tool that produces code nobody truly owns.

That is the line I keep coming back to:

Is AI helping people become more capable, or is it making human capability easier to bypass?

Because I keep coming back to one uncomfortable observation:

AI is being packaged as a worker.

And if AI is being packaged as a worker, what happens to human workers?

More specifically, what happens to the people who are still trying to become useful?

What Happens to Juniors?

This part is personal for me.

A few years ago, I was teaching web development to students across the entire state of New South Wales. I was helping people learn:

  • HTML,

  • CSS,

  • JavaScript,

  • PHP,

  • databases,

  • debugging,

  • structure,

  • and the slow, messy process of becoming a developer.

Back then, the path was difficult, but it made sense.

  • You learnt the basics.

  • You built small things.

  • You broke them.

  • You fixed them.

  • You asked questions.

  • You slowly became better.

That beginner stage mattered.

Junior developers have always learnt through repetitive, messy, beginner-level work. That work is not glamorous, but it is where judgement is built.

You become senior by first being junior.

But I now wonder whether AI is quietly breaking that pathway.

A company may look at a junior developer and see:

  • cost,

  • training time,

  • supervision,

  • mistakes,

  • slower output,

  • and delayed return on investment.

Then it may look at a senior developer with AI agents and see leverage.

Why hire and train a junior, the logic goes, when a senior developer can orchestrate AI agents to:

  • write boilerplate,

  • generate tests,

  • fix simple bugs,

  • refactor files,

  • document code,

  • scaffold CRUD screens,

  • and perform the work juniors used to learn from?

That question should make the industry uncomfortable.

Because if we remove junior work, we do not just remove junior jobs.

We remove the training ground.

So we need to ask ourselves honestly:

  • If AI writes the first draft, who learns to write?

  • If AI fixes the simple bugs, who learns to debug?

  • If AI generates the CRUD screens, who learns application structure?

  • If AI builds the first API, who learns validation, security, request handling, and failure modes?

  • If AI explains the code before the junior has struggled with it, who develops the instinct to know when something feels wrong?

  • If AI does the repetitive work, where do beginners build fluency?

  • If juniors are not given time to be slow, how do they ever become fast?

  • If we stop investing in people who are not yet productive, where do future seniors come from?

These are not theoretical questions.

They are workforce questions.

They are education questions.

They are industry survival questions.

The Gap Between Output and Understanding

AI can produce code faster than many beginners can understand it.

That creates a dangerous gap between output and understanding.

A senior developer has years of scars to draw from.

They have:

  • debugged strange production issues,

  • untangled legacy systems,

  • chased broken database queries,

  • fixed bad deployments,

  • dealt with edge cases,

  • and learnt that code is not just syntax.

Code is architecture.

Code is trade-offs.

Code is history.

Code is pressure.

Code is users.

Code is consequences.

A junior developer does not have that experience yet.

So what happens when large amounts of code are generated with minimal oversight?

What happens when a junior is asked to maintain code they did not design, do not fully understand, and could not have written themselves?

What happens when the AI-generated solution works just well enough to ship, but not well enough to be understood?

When it breaks:

  • Who debugs it?

  • Who owns it?

  • Who can explain it to the business?

  • Who can see the hidden security issue?

  • Who notices the bad abstraction?

  • Who spots the fragile dependency?

  • Who understands the poor database design?

  • Who recognises the edge case waiting quietly in production?

This is where I think we need a new phrase.

We already understand technical debt.

Technical debt is what happens when we ship code quickly today and leave future maintainers to pay the cost later.

But AI is creating another kind of debt.

Cognitive debt.

Cognitive debt is what happens when we let AI do the thinking today and leave future developers without the understanding they need tomorrow.

Every time a developer:

  • accepts code they do not understand,

  • skips the debugging process,

  • avoids reading the documentation,

  • lets an agent make architectural decisions without challenge,

  • or ships output without comprehension,

…a little more cognitive debt is created.

The codebase may grow.

The sprint may look successful.

The product may ship.

But the human understanding behind the system shrinks.

And that should worry us.

Because when something goes wrong, and it will, the organisation does not just need code.

It needs people who understand the code.

So the hard question is this:

Are we becoming more productive, or are we becoming less capable?

Assistive AI Versus Autonomous AI

This is the distinction I think we need to make more clearly.

Assistive AI keeps the human in the centre.

Autonomous AI can quietly move the human to the edge.

Assistive AI says:

Here is a possible answer. Let me help you understand it.

Autonomous AI says:

I have done it for you.

Assistive AI can be empowering. It can help someone learn faster, communicate better, build something they could not build alone, or make complex systems easier to use.

Autonomous AI can also be useful. There are tasks where automation makes sense. Nobody needs to manually repeat pointless busywork forever just to prove they are human.

But the risk is not automation itself.

The risk is automation without:

  • understanding,

  • accountability,

  • apprenticeship,

  • oversight,

  • restraint,

  • or ownership.

Automation becomes dangerous when it removes the human from the learning loop and then acts surprised when the human no longer understands the system.

That is why I think the question should not simply be:

Can AI do this?

The better question is:

What happens to the human when AI does this?

  • Do they learn more?

  • Do they understand more?

  • Do they become more capable?

Or do they become a supervisor of systems they no longer understand?

That difference matters.

Because assistive technology should lift people up.

It should not quietly hollow them out.

Are We Getting Faster, or Just Lazier?

This is not a comfortable thing to ask, but I think we have to ask it.

  • Is AI making us better?

  • Or is it making us impatient?

  • Are we learning more?

  • Or are we reading less?

  • Are we building deeper understanding?

  • Or are we accepting answers because they look confident?

  • Are we using AI to extend our judgement?

  • Or are we using it to avoid judgement?

There is a difference between a tool that strengthens a person and a tool that slowly replaces the person’s effort to think.

A calculator does not remove the need to understand maths.

A GPS does not remove the value of knowing where you are.

A spellchecker does not remove the need to communicate clearly.

And an AI coding agent should not remove the need to understand software.

But that depends on how we use it.

If AI helps us learn, challenge assumptions, test ideas, and understand systems faster, it can be powerful.

If AI becomes a shortcut around thinking, it becomes dangerous.

Not because the machine is evil.

Because dependency is quiet.

It does not arrive all at once.

It creeps in through convenience:

  • one generated function,

  • one skipped explanation,

  • one accepted pull request,

  • one unread dependency,

  • one command run without understanding it,

  • one agent given permission because it is easier than saying no.

Eventually we may look around and realise we are surrounded by systems we can operate but no longer deeply understand.

Cybersecurity Is the Clearest Warning Sign

The most alarming change in the last year is cybersecurity.

A year ago, many AI security fears were mostly framed around:

  • phishing emails,

  • fake voices,

  • deepfakes,

  • and low-quality malware generation.

Those risks are real.

But they are only the shallow end of the pool.

The deeper concern is that AI can help attackers:

  • understand software,

  • search for weaknesses,

  • automate reconnaissance,

  • generate exploit ideas,

  • refine malicious code,

  • and reduce the time between finding a vulnerability and weaponising it.

That does not mean every criminal suddenly has a magic zero-day machine.

It means the timeline is compressing.

And in cybersecurity, compressed timelines are dangerous.

AI does not need to invent cybercrime.

It only needs to accelerate it.

So we should ask:

  • What happens when attackers use AI faster than defenders can respond?

  • What happens when exploit research becomes cheaper?

  • What happens when scams become more personalised?

  • What happens when malicious code can be generated, tested, rewritten, and translated at scale?

  • What happens when the same agentic tools that help developers also help attackers?

The answer is not panic.

But it is not complacency either.

Mythos Was a Warning Label

Anthropic’s decision to restrict access to Claude Mythos Preview matters.

Mythos was not released broadly as a normal public model. It was placed behind a controlled defensive programme.

That should make people pause.

When an AI company decides one of its own models may be too capable in cybersecurity to release casually, that is not science fiction.

That is a warning label.

The concern is not that Mythos is evil. That is the wrong framing.

The concern is capability.

A model that can deeply understand software, find subtle flaws, and help produce working exploit paths is valuable for defenders.

It is also valuable for attackers.

In my view, restricting broad access was the responsible choice.

But that creates another difficult question:

  • If only large companies and approved partners get access to the strongest defensive AI, what happens to everyone else?

  • What happens to small businesses?

  • What happens to open-source maintainers?

  • What happens to underfunded public institutions?

  • What happens to ordinary developers trying to secure ordinary systems?

We need defensive AI.

But we also need fair access, strong controls, auditability, and serious responsibility around who gets to use cyber-capable models and how.

The Software Supply Chain Is Becoming a Battlefield

Modern software is built on trust.

We trust:

  • packages,

  • maintainers,

  • GitHub Actions,

  • CI/CD pipelines,

  • Docker images,

  • package registries,

  • lockfiles,

  • tokens,

  • and that the thing we installed is the thing we think we installed.

That trust was already fragile.

Now add AI agents.

An AI-powered development environment may have access to:

  • source code,

  • terminals,

  • package managers,

  • Git history,

  • private repositories,

  • local files,

  • environment variables,

  • npm tokens,

  • GitHub tokens,

  • cloud credentials,

  • SSH keys,

  • and connected tools.

That is a lot of power in one place.

Recent npm supply-chain incidents should be treated as early warning signs.

The important point is not that every attack was written by AI. Many were not.

The important point is convergence.

A poisoned package does not need to be created by AI to become more dangerous in an AI-driven development world. It only needs to land inside an environment where agents have tools, context, permissions, and access to secrets.

So ask yourself:

  • Do you know what your dependencies are doing?

  • Do you know what your post-install scripts are running?

  • Do you know which tokens exist on your machine?

  • Do you know what your AI tools can read?

  • Do you know what they can execute?

  • Do you know what would happen if a malicious README, GitHub issue, package, or webpage gave your agent instructions?

The developer machine used to be a workstation.

Now it is becoming an AI-operated control room.

And if attackers compromise the control room, they may not just steal code.

They may hijack the systems that understand the code.

The Exploit May Look Like Language

Software worms are not new.

But AI gives the old idea new soil.

Traditional worms spread through software vulnerabilities. They scanned, infected, replicated, and moved on.

AI-agent attacks may spread differently.

They may move through instructions:

  • a poisoned README,

  • a malicious GitHub issue,

  • a fake support ticket,

  • a compromised npm package,

  • a rogue MCP server,

  • a hidden instruction inside a webpage,

  • or an email an AI assistant processes without realising it has been tricked.

That is why prompt injection matters.

A chatbot with no tools can give a bad answer.

An agent with tools can take a bad action.

And an agent with broad permissions can leak data, expose secrets, install packages, modify code, or trigger workflows.

The exploit may not look like code.

It may look like language.

So the question becomes:

  • Are we securing the model?

  • Or are we securing the whole environment around the model?

Because in the agent era, the model is only one part of the risk.

  • The tools matter.

  • The permissions matter.

  • The memory matters.

  • The files matter.

  • The logs matter.

    The approval gates matter.

  • The human judgement around it matters most of all.

MCP Gives AI Hands — But Hands Need Rules

The Model Context Protocol, or MCP, is one of the most important developments in the agent ecosystem.

It allows AI systems to connect to tools and data sources in a standard way.

That is powerful.

It also increases the blast radius.

An AI agent may be able to interact with:

  • GitHub,

  • databases,

  • file systems,

  • browsers,

  • documentation,

  • business tools,

  • calendars,

  • CRMs,

  • cloud systems,

  • deployment tools,

  • and internal APIs.

That makes AI more useful.

It also makes mistakes more serious.

Every organisation adopting agentic AI should be asking:

  • What can the agent read?

  • What can it write?

  • Can it execute commands?

  • Can it access secrets?

  • Can it install packages?

  • Can it trigger deployments?

  • Is every action logged?

  • Who approves high-risk actions?

  • What happens when it is tricked?

If those questions feel boring, that is exactly why they matter.

Security usually fails in the boring places.

Cheap Intelligence Changes the Equation

AI is not only getting smarter.

It is getting cheaper.

DeepSeek, Qwen, Kimi, GLM, MiniMax, and other Chinese models have shown that strong AI capability is no longer limited to a few Western labs.

This matters because AI risk is not only about the most powerful model.

It is about how widely that power spreads.

Once intelligence becomes cheap, it scales.

Once it becomes open, it diffuses.

Once it diffuses, governance becomes much harder.

That does not mean open models are bad. Open models can be excellent for privacy, competition, research, accessibility, and local control.

But we need to be honest about the trade-off.

Open capability empowers everyone.

Including people who should not be empowered.

So we need to ask:

  • What happens when powerful reasoning becomes cheap?

  • What happens when coding agents become ordinary?

  • What happens when cyber capability spreads faster than cyber maturity?

  • What happens when every scammer, spammer, attacker, propagandist, and desperate business has access to cheap automation?

Again, the answer is not to ban the technology.

But pretending there is no trade-off is dishonest.

The Real Danger Is Not One Evil AI

I do not think the most immediate danger is a single self-aware machine waking up and deciding to destroy humanity.

That makes good cinema.

It is not the most likely near-term threat.

The real danger is messier and more boring:

  • millions of semi-autonomous systems,

  • connected to real tools,

  • given unclear permissions,

  • deployed by companies under pressure,

  • used by criminals at scale,

  • integrated into insecure software,

  • trusted by people who do not understand them,

  • and regulated by governments moving too slowly.

That is the threat.

Not one evil machine.

A thousand careless integrations.

A million cheap automations.

A civilisation slowly outsourcing judgement before it has decided what judgement must remain human.

Questions We Should Be Asking Now

I do not think the answer is to ban AI.

That is not realistic.

I also do not think the answer is to blindly adopt it everywhere because everyone else is doing it.

That is reckless.

Instead, I think we need to start asking harder questions before convenience makes those questions impossible to hear.

  • Are we using AI to become more capable, or simply more dependent?

  • Are we teaching people to think with AI, or teaching them to stop thinking?

  • Are we still training juniors, or quietly replacing their learning path?

  • Are senior developers using AI to mentor and explain, or just to produce more output?

  • Do we understand the code we ship?

  • Do we understand the dependencies we install?

  • Do we understand the permissions we give agents?

  • Are our approval processes real, or are humans just rubber-stamping AI output?

  • Are we measuring productivity while ignoring comprehension?

  • Are we creating technical debt?

  • Are we creating cognitive debt?

  • Are we building systems people can maintain, or systems people can only prompt?

  • Are we making humans more powerful?

  • Or are we making humans less necessary?

Those questions are uncomfortable.

They should be.

Final Thought

A year ago, my warning was that we were funding our own replacement.

I still believe that.

But the warning has grown.

We are not only funding replacement.

We are building systems that act.

We are connecting them to tools.

We are giving them credentials.

We are letting them:

  • write code,

  • browse the web,

  • speak in our voices,

  • generate reality,

  • and enter our workplaces, schools, homes, software, supply chains, and security systems.

And we are doing it at speed.

That speed is the problem.

Human institutions move slowly.

Laws move slowly.

Education moves slowly.

Ethics moves slowly.

Security moves slowly.

Culture moves slowly.

AI does not.

Foreboding AI was never about hating technology.

I am a developer. I use technology. I build with it. I research it. I have spent decades trying to make technical systems simpler, more useful, and more accessible for real people.

I believe in assistive technology.

I believe in tools that help people do more, understand more, and overcome barriers.

But powerful tools require powerful restraint.

We should use AI. But we should not worship it.

We should build with it. But we should not surrender to it.

And above all, we should remember that humanity’s greatest strength was never efficiency.

It was judgement.

So maybe the question is not whether AI can replace us.

Maybe the question is whether we will slowly hand over the parts of ourselves that made us worth replacing.

One subscription.

One prompt.

One generated answer.

One unchecked permission.

One piece of judgement at a time.

Total
0
Shares
Leave a Reply

Your email address will not be published. Required fields are marked *

Previous Post

Notion just turned its workspace into a hub for AI agents

Related Posts