The rapid adoption of AI-assisted coding is reshaping how software is produced and deployed. In 2026, AI-assisted coding is no longer confined to startups and hobbyists. It is embedded across the software industry, from companies like Google and NVIDIA to financial institutions, cloud providers, and, increasingly, government agencies. AI systems are used to generate boilerplate, refactor legacy code, write tests, and generally accelerate development. AI assistance is now simply part of the standard development environment, not a separate workflow.
But as security expert David Mytton (founder and CEO of developer security provider Arcjet) and many others have argued, this shift introduces a new category of risk: one driven less by bad code itself than by insufficient understanding of what is being shipped.
The problem is not that AI-generated code fails outright. It often appears correct enough to deploy while quietly embedding assumptions, edge cases, or security weaknesses that have not been examined. AI-assisted coding often “works,” but its correctness is inferred from surface behavior rather than established through strong constraints or deep review. When applied to security-sensitive or foundational systems, this can have serious consequences.
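To make that failure mode concrete, here is a purely hypothetical sketch in Rust (not drawn from any real AI output) of code that looks reasonable and passes a casual check, yet quietly assumes its input is never empty:

```rust
// Hypothetical example: computes the average response time in milliseconds.
// It compiles, and it behaves correctly on every input a quick reviewer
// is likely to try first.
fn average_ms(samples: &[u64]) -> u64 {
    let total: u64 = samples.iter().sum();
    // Unexamined assumption: `samples` is never empty.
    // An empty slice makes this a divide by zero, which panics at runtime.
    total / samples.len() as u64
}

fn main() {
    println!("{}", average_ms(&[120, 80, 100])); // prints 100; looks correct
    // println!("{}", average_ms(&[]));          // panics: attempt to divide by zero
}
```

Nothing about the function looks wrong on the surface; the flaw only appears under a condition nobody thought to exercise.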
Even Linus Torvalds has said he feels AI-assisted coding can be appropriate for prototypes, for experiments, and for helping beginning coders get started. The risk escalates when that relaxed approach is applied indiscriminately to production systems, where it can create dangerous failures and security weaknesses that are silent and cumulative.
These risks are well known and widely discussed. Still, AI-assisted coding makes invention cheap and persuasive, and there is no sign it will be abandoned. It seems inevitable that unverifiable designs will slip into production as a matter of course.
Programming language choice can significantly mitigate this risk by acting as an effective constraint. Dynamic, permissive languages such as Python, JavaScript, Ruby, and PHP sit at the high-risk end. They allow code to run with minimal upfront validation, pushing most errors to runtime. With AI in the loop, this makes it easy to deploy code that appears correct but fails under specific conditions. JavaScript adds further risk through implicit type coercions, complex asynchronous behavior, and a vast dependency ecosystem.
C and C++ present a different but equally serious risk. Although compiled and statically typed, they do not enforce memory safety or prevent undefined behavior. This is no secret and is discussed often. But C is everywhere and must be dealt with; a skilled and seasoned C developer knows to scan code carefully for this class of error. Without that oversight, AI-generated code may compile cleanly yet contain latent vulnerabilities such as buffer overflows or use-after-free bugs that surface long after deployment.
Languages like Java, C#, and Go occupy a middle ground, enforcing stronger structure and surfacing more errors early, but they can still allow logic and security flaws to pass through compilation.
At the lower-risk end are languages such as Swift and Kotlin, which enforce stronger type systems, null-safety, and safer defaults than older languages.
But many would agree that Rust may be the most risk-averse language available. Its notoriously strict compiler enforces invariants around memory safety, lifetimes, and concurrency that cannot be bypassed accidentally, forcing many classes of errors to fail early and visibly. While this does not entirely prevent logic or design mistakes, it substantially reduces silent failure.
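As a minimal sketch of what that strictness looks like (a hypothetical snippet, not tied to any particular project), the code below tries to hold on to a reference after the data behind it has been freed. The equivalent dangling pointer in C compiles without complaint; the Rust compiler refuses to build it:

```rust
fn main() {
    let first;
    {
        let names = vec![String::from("alpha"), String::from("beta")];
        // Borrow a reference into `names`...
        first = &names[0];
    } // ...but `names` is dropped (freed) at the end of this block.

    // The compiler rejects this program with error E0597:
    // "`names` does not live long enough".
    println!("{}", first);
}
```

The point is not that Rust developers never make mistakes; it is that this entire class of mistake fails loudly at compile time instead of silently in production.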
I have some experience with Rust, and its compiler is known for providing useful, actionable corrections in its error messages. It is indeed more difficult to get bad code through that compiler, but it can be done. As part of my recent work testing and pushing the limits of large language models, I asked KIMI to do just that: create a section of code with flaws that would get past the compiler. It was able to do so. I am not skilled enough to know whether a skilled and diligent Rust developer would have caught such an error, though probably so.
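I will not reproduce that output here, but as a purely hypothetical illustration of the general kind of flaw that sails past the compiler, consider an off-by-one boundary check. It is type-correct and memory-safe, and the borrow checker has no opinion about it:

```rust
// Hypothetical illustration: a discount that is supposed to apply
// to orders of $100.00 (10,000 cents) or more.
fn discounted_total(total_cents: u64) -> u64 {
    // Logic flaw: `>` should be `>=`, so an order of exactly $100.00
    // never receives the discount. The compiler has no way to know
    // what the business rule was supposed to be.
    if total_cents > 10_000 {
        total_cents * 90 / 100
    } else {
        total_cents
    }
}

fn main() {
    assert_eq!(discounted_total(12_000), 10_800); // passes; looks fine
    println!("{}", discounted_total(10_000));     // prints 10000, should be 9000
}
```

A careful human reviewer who knew the business rule would likely catch this; the compiler, by design, cannot.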
The broader lesson is that AI-assisted coding is not inherently reckless, but it is viable only where constraints are strong. The proper environment must be provided: strong compilers, comprehensive testing, limited permissions, and meaningful review by skilled human developers. In such an environment, AI-assisted coding can be both productive and relatively safe. As with all of my other AI testing, I conclude that it is crucial to keep humans in the loop.
Ben Santora – January 2026