Memories and thoughts are the same thing, someone tried to explain to me recently. You have to think to remember, in other words. This is hard to grasp for a lot of people because they think they have memories. They wrongly think memory is a noun instead of a verb, which is fine in philosophy and psychology, but in cutting-edge computer science we have to be precise about these sorts of things.

Twenty-five years ago, a full quarter century, when I first started writing fuzzers, people thought it was an absolutely stupid thing to do. The smart people were using their giant brains to do static analysis. They were tainting and sinking. They were reading the code and finding flaws. They did threat models. They did not write glorified for loops that shoved different amounts of A's into different RPC functions. But I had the hubris of a teenage hacker, and I thought it was fun. More fun, perhaps, than reading code.
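
For anyone who never lived through that era, the original tooling really was about that simple. Here is a minimal sketch of one of those glorified for loops - the target address, the one-byte opcode framing, and the payload lengths are hypothetical stand-ins, not any particular protocol or tool:

    # A minimal sketch of an old-school "glorified for loop" fuzzer.
    # The host, port, one-byte opcode framing, and lengths below are
    # hypothetical stand-ins, not any real protocol or tool.
    import socket

    HOST, PORT = "192.0.2.10", 9999

    def send_probe(payload: bytes) -> bool:
        """Send one payload; return True if the service still answers."""
        try:
            with socket.create_connection((HOST, PORT), timeout=2) as s:
                s.sendall(payload)
                s.recv(1)
            return True
        except OSError:
            return False  # refused, reset, or timed out - maybe we killed it

    # Different amounts of A's into different "functions" (here, opcodes).
    for opcode in range(256):
        for length in (16, 64, 256, 1024, 4096, 65535):
            if not send_probe(bytes([opcode]) + b"A" * length):
                print(f"no response: opcode={opcode} length={length}")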

In 2025, fuzzing is part of the software development lifecycle for any organization rich enough to call a hyperscale datacenter home. It is a sine qua non for secure software. Fuzzing, we now understand, is reasoning. And if you can't reason over your code, you can't secure it.

Part of the value is that fuzzing echoes machine learning in that it scales nicely with the amount of CPU you can throw at it. And there are no false positives when you measure whether an input crashes a program - it either does or it does not.

There are downsides, of course. Many inputs may cause the same crash. Fuzzing identifies that a flaw exists, but it doesn't tell you what the flaw actually is. Fuzzing often finds enough flaws that development teams become overwhelmed with triage. And fuzzing can often be too dumb to reach the important bugs, since it explores the space of possible inputs semi-randomly, even with coverage guidance.
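
To make the triage problem concrete, the usual workaround is to bucket crashing inputs by some signature of where the target fell over. A hedged sketch follows - the helper names, the ASan-style stderr parsing, and the ./target and ./crashes paths are my own illustrative assumptions, not any specific fuzzer's interface:

    # Hypothetical triage helper: collapse a directory of crashing inputs
    # into buckets keyed by the top frames of a sanitizer stack trace.
    import hashlib
    import re
    import subprocess
    from collections import defaultdict
    from pathlib import Path

    def crash_signature(target: str, crash_input: Path) -> str:
        """Re-run the target on one crashing input and hash the first few
        frame names from the ASan-style trace it prints on stderr."""
        proc = subprocess.run([target, str(crash_input)],
                              capture_output=True, timeout=30)
        frames = re.findall(r" in ([A-Za-z_][\w:]*)",
                            proc.stderr.decode(errors="replace"))
        return hashlib.sha1("|".join(frames[:3]).encode()).hexdigest()[:12]

    def bucket_crashes(target: str, crash_dir: str) -> dict[str, list[Path]]:
        """Group crashing inputs so that one flaw shows up as one bucket."""
        buckets: dict[str, list[Path]] = defaultdict(list)
        for crash_input in sorted(Path(crash_dir).iterdir()):
            buckets[crash_signature(target, crash_input)].append(crash_input)
        return dict(buckets)

    # Thousands of crashes usually collapse into a handful of buckets, and
    # each bucket still needs a human (or an LLM) to say what the bug is.
    for sig, inputs in bucket_crashes("./target", "./crashes").items():
        print(sig, len(inputs))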

We (as a community) tried to correct these things with SMT solvers and smarter fuzzers. But now we have a new tool: LLMs, which reason in a very different way. But they reason all the same.

Admittedly, there are many disbelievers. "LLMs just repeat what they are trained on," they say, and taken to an extreme that's true - but it's also true for any of us. In practice, they reason perfectly well. And not too long from now, maybe a couple of years at most, any organization that is not using them widely for security engineering will be behind the curve - the same way teams not using fuzzers are today.

Memories and thoughts are, in essence, the same thing because both require the act of reasoning. In computer science, fuzzing and LLMs are tools that embody this principle. They don't passively store knowledge - they actively explore, test, and refine it.

When I first started fuzzing, it was dismissed as a foolish endeavor because it didn’t look like traditional reasoning. Now, it’s indispensable. LLMs are on a similar path: misunderstood by some, but already reshaping how we approach security.

Just as fuzzing forced us to rethink what reasoning over code looks like, LLMs are forcing us to rethink reasoning itself. In both cases, the act - not the object - is what matters. They are the root of the root and the bud of the bud - the foundation of what comes next. And if you don’t carry this forward, you risk being left behind in a world that’s growing beyond you.

-dave