Rust in Linux: Fixing the Edges, Not the Core

In 2022, Linus Torvalds decided to merge Rust support into the Linux kernel, and version 6.1 became the first release to include the new Rust infrastructure. This was not a rejection of C, nor an attempt to rewrite the kernel; C remains the kernel’s foundation. What changed was a recognition that the kernel’s security risks were coming not primarily from its core, but from the vast and expanding perimeter of code written by humans under real-world constraints. As the kernel has grown, developers have had to track memory allocations and deallocations by hand across complex, asynchronous, and hardware-driven paths, a task that is error-prone even for highly experienced programmers.

The fundamental reason the Linux kernel has never used a garbage collector (GC) for memory management comes down to the requirement for deterministic performance. In a kernel environment, the CPU must respond to hardware interrupts and manage system resources with microsecond precision. A tracing GC introduces “stop-the-world” pauses, during which the system stops to scan and reclaim memory; that kind of unpredictable latency is unacceptable in interrupt and scheduling paths and can lead to outright system failure. Consequently, the kernel has historically relied on manual memory management using C functions such as kmalloc() and kfree().
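To make the timing requirement concrete, here is a minimal userspace Rust sketch, plain standard-library code rather than kernel code, of the property the kernel needs: memory released at exact, statically known points, with no collector that could pause execution in between. In the kernel’s C code, kmalloc() and kfree() provide the same determinism, but only as long as the programmer pairs every call by hand.

    // Userspace sketch of deterministic reclamation: each buffer is released
    // at a statically known point (end of scope or an explicit drop), never
    // at the whim of a background collector.
    fn main() {
        {
            let packet = vec![0u8; 1500]; // heap allocation
            println!("processing {} bytes", packet.len());
        } // `packet` is freed right here, as its scope ends

        let frame = vec![0u8; 4096];
        println!("frame of {} bytes ready", frame.len());
        drop(frame); // freed at an explicit point chosen by the programmer

        // No garbage-collection pause can occur between any two statements,
        // because there is no collector at all.
    }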

Importantly, these core memory-management subsystems are not where most exploitable bugs arise. The page allocator, the slab/SLUB allocators, the virtual memory layer, and the MMU-interaction code are extremely complex, but they are also centralized, heavily audited, and maintained by a small group of specialists. Decades of scrutiny mean that changes there are rare and approached with caution. Bugs do occur in this code, but they are comparatively uncommon, and when they do appear their effects tend to be system-wide rather than confined to a single driver.

Most memory-safety vulnerabilities instead originate in code that uses these allocators: device drivers, networking stacks, filesystems, and other edge subsystems. This code is sprawling, hardware-specific, and often written by vendors or occasional contributors. Under C, the compiler cannot enforce correct object lifetimes across callbacks, interrupts, and asynchronous paths. The result is a steady stream of use-after-free, double-free, and invalid aliasing bugs, which have historically accounted for a large share of serious kernel exploits.

Under this manual system, the responsibility for memory safety rests entirely on the human programmer. Every allocation must be matched with exactly one deallocation, on every code path, a discipline that becomes ever harder to maintain as kernel code grows in complexity. When it slips, the result is precisely the double-free and use-after-free vulnerabilities described above.

Rust addresses these risks by moving memory-management logic from the programmer’s head into the compiler. Through its system of ownership, borrowing, and lifetimes, Rust verifies that memory is handled correctly before the code ever runs. Unlike Go, whose runtime garbage collector manages memory in the background, Rust enforces this safety at compile time with no runtime overhead. That lets it run in the freestanding kernel environment as efficiently as C while ruling out, in safe code, the most common classes of memory error.
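As a minimal sketch of what those rules look like in practice, the following is ordinary userspace Rust with no kernel crates involved; DmaBuffer and complete_io are illustrative names, not real kernel APIs. Ownership ensures the buffer is freed exactly once, borrowing ties any reference to the buffer’s lifetime, and every violation shown below is rejected by the compiler rather than surfacing as a runtime bug.

    // Illustrative types only; not kernel APIs.
    struct DmaBuffer {
        data: Vec<u8>,
    }

    impl Drop for DmaBuffer {
        fn drop(&mut self) {
            // Runs exactly once, when the single owner releases the value.
            println!("releasing {} bytes", self.data.len());
        }
    }

    // Borrowing: the returned reference is tied to the lifetime of `buf`,
    // so it can never outlive the buffer it points into.
    fn header(buf: &DmaBuffer) -> &[u8] {
        &buf.data[..4]
    }

    // Ownership: taking the buffer by value moves it into this function,
    // which frees it when it returns.
    fn complete_io(buf: DmaBuffer) {
        println!("completing I/O on {} bytes", buf.data.len());
    }

    fn main() {
        let buf = DmaBuffer { data: vec![0u8; 4096] };
        println!("header: {:?}", header(&buf));

        complete_io(buf);

        // Each of the following is a compile error, not a latent runtime bug:
        // complete_io(buf);               // error: use of moved value `buf`
        // println!("{:?}", header(&buf)); // error: borrow of moved value `buf`
    }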

Currently, the kernel community is taking a surgical approach to adoption, using Rust primarily for new device drivers and the abstractions that support them. The core subsystems, such as the process scheduler and the memory-management code, remain in C because of their complexity and decades of optimization. This hybrid model lets the kernel evolve toward a safer future without the impractical task of rewriting millions of lines of existing, working C code.
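For a sense of what the driver-facing side looks like, here is a rough sketch of a minimal Rust kernel module, modeled on the in-tree samples under samples/rust. The kernel crate’s macros and trait signatures are still evolving, so treat the exact field names and types below as approximate rather than authoritative.

    // Sketch of a minimal Rust kernel module, patterned after the in-tree
    // samples. Field names and signatures vary between kernel versions.
    use kernel::prelude::*;

    module! {
        type: HelloRust,
        name: "hello_rust",
        author: "Example Author",
        description: "Minimal Rust kernel module sketch",
        license: "GPL",
    }

    struct HelloRust;

    impl kernel::Module for HelloRust {
        fn init(_module: &'static ThisModule) -> Result<Self> {
            pr_info!("hello_rust loaded\n");
            Ok(HelloRust)
        }
    }

    impl Drop for HelloRust {
        fn drop(&mut self) {
            pr_info!("hello_rust unloaded\n");
        }
    }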

Ben Santora – February 2026
