What the next five to ten years ask from anyone trying to stay useful in the AI era.
Originally published on Lei Hua’s Substack.
This coda follows a four-question arc: (1) the enduring core, (2) the pressures now bearing on it, (3) the likely decision points ahead, and (4) the questions facing this archetype.
Honest Disclaimer
This section is structured extrapolation from public materials — not prediction. If you re-read this in two years, some of these projections will not have happened — that’s expected. The purpose of this book is not to bet on the future, but to give you a lens for watching change.
Treat this as a meditation, not a forecast.
I. The Core We Have Already Established
After chapters one through six, four things can be stated with reasonable confidence as Karpathy’s true and stable core; they survived the entire 2022–2026 turbulence unchanged.
- Minimalism, readability, the demystification of the training stack. From nanoGPT to nanochat to microGPT: three generations of “the most complete thing in the fewest lines of code.” It is both a technical aesthetic and a moral posture: he refuses to let frontier models look like magic.
- The dignity of education. Across all of his public work, this is the thread that has strengthened most steadily. Eureka Labs is not a business; it is the material form of this conviction.
- An allergy to hype. As early as the 2023 State of GPT talk he was urging “low-stakes + human-in-the-loop”; the 2025 “slop” and “march of nines” are the same caution at higher volume.
- A preference for open and pluralistic ecosystems. 2024’s “coral reef” became 2026’s “build RL environments in verifiable domains the labs haven’t claimed.” The romance turned into tactics, but the anti-concentration, anti-monopoly undercurrent never moved.
Any projection about his next moves must respect these four. He will not suddenly join a frontier lab. He will not suddenly become a hype machine. He will not set education aside to return to pure research. And he will not sit quietly in a world swallowed by five mega-corps.
II. Forces Currently Pressing on That Core
In the last 12 to 18 months, he has been publicly responding to at least four specific pressures:
- The scaling problem of agentic engineering. Agents now write 80% of code, but jagged intelligence makes the remaining 20% especially expensive. The trade-off between the quality bar and the speed bar is one he faces every day; he returned to this pressure repeatedly at Sequoia 2026.
- Productization pressure on Eureka Labs. The LLM101n course, announced in 2024, had still not been widely launched by 2026. Education is slow, but the line between slow and dead is drawn by product cadence. As a founder, he must make Eureka a company, not just a mission statement.
- The cost of public language. The Dwarkesh episode positioned him as “the popper of the AI bubble,” a label he explicitly rejected in his X clarification but could not fully control. As a public thinker, he must choose: keep speaking sharply and accept being flattened, or soften the edge and lose the singular position of an internal critic.
- Further reshaping of his personal working style. He has already admitted to a mild “AI psychosis.” As agents increasingly write code and conduct research, the pleasure of being an independent thinker is itself being altered. This is an existential pressure no one else can think through for him.
III. Five Likely Decision Points in the Next 5–10 Years
Each decision point is anchored in the core and the current pressures above. They are not predictions; they are the choices people of his archetype are most likely to face.
Eureka Labs’ shape
What it is: Eureka Labs aims to deliver “a 1-on-1 AI tutor for every student”; he told Dwarkesh that a one-on-one tutor was how he learned Korean. To turn that experience into a scalable, durable company, he must choose among three shapes: a B2C mass product, a B2B school/enterprise sale, or a high-end tool built around his own course content.
Why it’s likely: Every education founder faces the same unresolvable tension in the business model. His expressed preference leans B2C, but customer acquisition in B2C education is notoriously expensive.
Possible directions: (a) B2C at mass scale, which requires a new economics of teaching; (b) becoming a “tool company” for his own courses, small but durable; (c) selling to education departments or big platforms while retaining the course IP.
Signals to watch: Does Eureka raise funding rounds, and at what valuation? Does it begin hiring sales and partnerships staff beyond those who handle enrollment?
How to manage relations with frontier labs
What it is: He called frontier model code “slop” on Dwarkesh. But his own next-stage research (AutoResearch, microGPT) still depends on frontier models. As an independent educator-and-internal-critic, how does he handle his dependence on the very things he criticizes?
Why it’s likely: OpenAI / Anthropic / Google are simultaneously his tools and his targets. The tension will keep accumulating.
Possible directions: (a) stay permanently independent, pay API fees, criticize publicly; (b) form a “critic-allied” relationship with one specific lab; (c) shift toward an open-source / open-weights model ecosystem as his working foundation.
Signals to watch: Does he start recommending open-weights models more explicitly? Does he join any lab’s advisory board? How does he interact in public with Sutskever, Amodei, and other frontier-lab leadership?
Whether to accept another “inside” role
What it is: He has left OpenAI twice. He probably believes he won’t return — but the boundary between independent educator and frontier researcher is blurring. If a lab tomorrow offered him a senior role on education/alignment/interpretability research, while letting him keep Eureka Labs, what would he do?
Why it’s likely: Historically he has cycled in and out twice (Stanford → OpenAI → Tesla → OpenAI → Eureka). The pattern is not definitively over.
Possible directions: (a) decline, citing Eureka’s need for full-time attention; (b) accept a part-time or advisory role; (c) accept a formal role while explicitly reserving the right to leave.
Signals to watch: Does the tone of his commentary on internal lab work change? How closely does he collaborate with any specific lab?
Finding a sustainable way of working beyond “AI psychosis”
What it is: He has admitted that “sixteen hours a day directing my will at agents” leaves him in a mild psychotic state. That state is not sustainable on a two- or five-year horizon. He must either find a new equilibrium with agents, or consciously slow down.
Why it’s likely: Any way of working reshaped by a new tool needs a stable equilibrium; otherwise the cost spills over from the psychological into the physiological.
Possible directions: (a) discover a rhythm that alternates agent-driven work with deep human thinking; (b) push sixteen hours back to eight and accept less output; (c) make the question itself public, turning “human mental health while working with agents” into a research theme inside Eureka.
Signals to watch: Does he start writing about sustainable ways of working in his blog or interviews? Does his own output rhythm visibly slow or restructure in 2027–2028?
How his judgment will change if AGI actually arrives
What it is: He currently says “AGI is still a decade away.” If a genuine capability leap occurs before 2030 — say, a model that approaches human level across nearly all verifiable domains — how will he recalibrate? This is the harshest test for a public thinker: when your prediction is wrong, what do you do?
Why it’s likely: Not because he is certain to be wrong — he might not be. But he must preserve a posture for elegantly admitting error.
Possible directions: (a) publicly acknowledge the timeline error in an “I was wrong” blog post (which fits his honesty); (b) redefine the term so he isn’t “wrong” (which doesn’t fit his honesty); (c) further split his judgment — “core intelligence has arrived, but AGI as economic impact is still in the march of nines.”
Signals to watch: His first reaction, in any blog / X post in the first week after a major capability leap. That post will be the most important next primary source for this biography.
IV. People of His Archetype — And You
If you have read this far, this section is for you.
People of Karpathy’s archetype share a recognizable pattern: technical insiders who refuse a purely technical identity; public speakers who refuse hype; one eye on the code, the other on education, ethics, and the ecosystem. This type will not disappear in the AI era; they will become more important, because they are translators.
But this type will also face, in the next 5–10 years, a handful of nearly unavoidable shared questions:
Question 1: When new tools invert your way of working, how do you keep authorship? Karpathy reported “agents write 80%” — that’s a question about working style, but it’s also a question about identity. Anyone whose self has been built on making things by hand must, once agents take over, answer again: “What am I actually doing now?”
Question 2: How do you hold the middle, between hype and denial? After Dwarkesh, Karpathy was pushed toward the “bubble-popper” label, which he himself refused. But holding the middle requires re-clarifying every few months — a continuous tax on attention. Anyone trying to be a “sober insider” will pay this tax.
Question 3: When your field moves faster than you, where does your dignity come from? Not from outrunning the tool — you can’t. It comes from how you live with the tool — replaced by it, or extending yourself with it. Karpathy chose the second, at the cost of his own “AI psychosis.” That path is yours to walk too.
Question 4: Will education really become “the gym”? Or will it disappear? Karpathy’s core bet is the former — that post-AGI education resembles today’s gym: for fun, for health, for self-dignity. But this bet might not hold. Education could also become a luxury good, a class marker, something out of reach for ordinary people. The fundamental question for educators of his generation is whether the “gym future” of education is actually an egalitarian future.
The Question Left on the Table
If only one question can be left at the end of this book, for you to carry back into your own life, it is this:
In a world moving faster than you, what kind of person are you willing to become?
Karpathy’s answer is a specific posture — keep working, keep recalibrating, publicly admit what you got wrong, and don’t surrender the core inside you that hasn’t changed. That is not the answer. It is one version of the answer.
The point of this book is not to persuade you to agree with him. It is to invite you, with the same clarity — same lack of drama, same honesty, same refusal of both the hype glasses and the denial glasses — to think about your own next step.
May you, in this era, recalibrate gracefully.
Sources
This coda draws on all the materials cited in the previous six chapters. Its judgments rest on those materials alone, with no new external sources introduced. For verification, see the sources list at the end of each chapter.
