I designed one of the first working fuzzers (albeit unintentionally) back in the late '90s. I don't remember if I published it, but I still have the code. It worked - badly - but it worked. I was heavily flamed, though, because - as you stated - it was not hip. It only attacked environment-variable and command-line-argument based vulnerabilities (a sketch of the style follows below). But in the '90s and early '00s, we had no shortage of local suid-based flaws. Despite having accidentally intuited the advantages of automation in exploit development, I never augmented it, for one key reason.
xdr, whom I considered a good friend at the time (Jah bless him), made a point that actually took me away from pursuing further development of this app: I did not yet properly understand the underlying fundamentals of CPU/OS architecture that resulted in exploitable conditions or successful exploitation. He was right. Even if I had wanted to improve the app, I really didn't understand how to achieve the increasingly difficult goal I was aiming for. So, rather than writing an automation script that helped me skip over /the hard details/, I focused on learning the science I was trying to ignore. That really made the difference for my career, but I should have returned to improving the automation.
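For anyone who never saw that era, here is a minimal sketch of what that class of fuzzer looked like - the target path and variable names are hypothetical, and this is nothing like my original code:

```python
#!/usr/bin/env python3
"""A '90s-style local fuzzer, reconstructed from memory: stuff ever-longer
strings into environment variables and argv of a suid target and watch for
deaths by signal. Illustrative sketch only - the target path is made up."""
import os
import subprocess

TARGET = "/usr/local/bin/some-suid-binary"  # hypothetical suid target
ENV_VARS = ["HOME", "TERM", "LANG", "MAIL"]

for length in (64, 256, 1024, 4096, 16384):
    payload = "A" * length
    for var in ENV_VARS:
        env = dict(os.environ, **{var: payload})  # poison one variable
        proc = subprocess.run([TARGET], env=env, capture_output=True)
        if proc.returncode < 0:  # POSIX: negative means killed by a signal
            print(f"crash! {var} at {length} bytes, signal {-proc.returncode}")
    proc = subprocess.run([TARGET, payload], capture_output=True)  # argv attack
    if proc.returncode < 0:
        print(f"crash! argv at {length} bytes, signal {-proc.returncode}")
```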
I tell this short story for a reason... I wonder sometimes if we are doing the same thing with LLMs (et al). We are building sophisticated and complex - but exciting - systems whose internals are often mysteries to us. I respect that there is a race to Better Than, but I wonder if the result will leave us somehow less sophisticated and more dangerously dependent on the LLM than if we truly and thoroughly understood the mechanics of the underlying magic of LLM systems. Every time I talk to a sophisticated engineer of these modern monoliths and they relate to me how they are still unsure how/why LLMs work or improve internally... I am not marveling at the miracle, but rather cautiously cringing...
D
On Sat, Jan 11, 2025 at 3:23 PM Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
Memories and thoughts are the same thing, someone tried to explain to me recently. You have to think to remember, in other words. This is hard to grasp for a lot of people because they *think* they have *memories*. They wrongly think memory is a noun instead of a verb, which is OK in philosophy and psychology, but in cutting-edge computer science we have to be precise about these sorts of things.
Twenty-five years ago - a full quarter century - when I first started writing fuzzers, people thought it was an absolutely stupid thing to do. The smart people were using their giant brains to do static analysis. They were tainting and sinking. They were reading the code and finding flaws. They did threat models. They did not write glorified for loops that made different amounts of A's go into different RPC functions. But I had the hubris of a teenage hacker, and I thought it was fun. More fun, perhaps, than reading code.
In 2025, fuzzing is part of the software development lifecycle for any organization rich enough to call a hyperscale datacenter home. It is a *sine qua non* for secure software. Fuzzing, we now understand, is *reasoning*. And if you can't reason over your code, you can't secure it.
Part of the value is that fuzzing echoes machine learning in that it scales nicely with the amount of CPU you can use. And there are no false positives when you measure whether an input crashes a program - it either does or it does not.
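That binary oracle is the whole trick. A minimal sketch of the check, assuming a POSIX system where a negative return code means the process died on a signal:

```python
import subprocess

def crashes(target: str, data: bytes) -> bool:
    """The fuzzer's oracle: did this input kill the program?
    No heuristics, no taint model - just ground truth from the kernel."""
    try:
        proc = subprocess.run([target], input=data,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        return False  # a hang, not a crash
    return proc.returncode < 0  # POSIX: negative means killed by a signal
```

Every verdict is reproducible and independent of every other, which is why the technique parallelizes across as many cores as you can afford.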
There are downsides, of course: many inputs may cause the same crash. Fuzzing identifies that a flaw exists, but it doesn't tell you what the flaw actually is. Fuzzing often finds enough flaws that development teams become overwhelmed with triage. And fuzzing can often be too dumb to reach the important bugs, since it is exploring the space of possible inputs semi-randomly, even with coverage-guided analysis.
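Triage is where the pain shows up first. One common, admittedly crude mitigation is bucketing crashes by a hash of their top stack frames - a sketch with made-up reports, assuming you can pull call stacks from a debugger or core dumps:

```python
import hashlib
from collections import defaultdict

def bucket_key(frames: list[str], depth: int = 3) -> str:
    """Collapse crashes that share their top few stack frames into one bucket."""
    return hashlib.sha1("|".join(frames[:depth]).encode()).hexdigest()[:12]

# Hypothetical reports: (crashing input, call stack) as a debugger might report.
crash_reports = [
    ("input_001", ["strcpy", "parse_header", "handle_request", "main"]),
    ("input_002", ["strcpy", "parse_header", "handle_request", "main"]),
    ("input_003", ["memcpy", "decode_body", "handle_request", "main"]),
]

buckets: dict[str, list[str]] = defaultdict(list)
for input_file, frames in crash_reports:
    buckets[bucket_key(frames)].append(input_file)

print(f"{len(crash_reports)} crashing inputs -> {len(buckets)} buckets")
```

The bucketing is approximate - distinct flaws can share frames, and one flaw can surface through many stacks - but it turns thousands of crashing inputs into a reviewable handful.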
We (as a community) tried to correct these things with SMT solvers and smarter fuzzers. But now we have a new tool: LLMs, which reason in a very different way. But still, they *reason*.
Admittedly, there are many disbelievers. "LLMs just repeat what they are trained on" - taken to an extreme that's true, but it's also true of any of us. In practice, they reason perfectly well. And not too long from now, maybe a couple of years at most, any organization that is not using them widely for security engineering will be left behind the curve - the same way teams not using fuzzers are today.
Memories and thoughts are, in essence, the same thing because both require the act of reasoning. In computer science, fuzzing and LLMs are tools that embody this principle. They don't passively store knowledge - they actively explore, test, and refine it.
When I first started fuzzing, it was dismissed as a foolish endeavor because it didn’t look like traditional reasoning. Now, it’s indispensable. LLMs are on a similar path: misunderstood by some, but already reshaping how we approach security.
Just as fuzzing forced us to rethink what reasoning over code looks like, LLMs are forcing us to rethink reasoning itself. In both cases, the act - not the object - is what matters. They are the root of the root and the bud of the bud - the foundation of what comes next. And if you don’t carry this forward, you risk being left behind in a world that’s growing beyond you. -dave
Dailydave mailing list -- dailydave@lists.aitelfoundation.org
To unsubscribe send an email to dailydave-leave@lists.aitelfoundation.org