How would one actually move the bar in defense? A big part of me thinks you're just not going to patch your way out of the problem. And the number of organizations you can rely on to actually make a difference seems pretty small. Even converting every Linux binary to Rust would only make sense if you could find a team that could actually maintain and support that code base, and I don't know that you could.
Like in a sense, what you have to do is completely rebuild how you build software and have the large language model be the intermediary for everything?
Dave
Reduce complexity, duplication, and scope in your infrastructure. Your developers and infrastructure staff would need to agree on standardized libraries, frameworks, etc., and you'd need skilled technical staff to validate claims that something isn't possible within that scope. You'd also need to hold people accountable for showing that any added complexity delivers business value greater than its overhead (vs., say, just getting them promoted for rolling out a *cool new framework*). Once you have done that, you can start evaluating infrastructure growth rate against security evaluation rate to consistently maintain your defensive bar.
Unfortunately, society's incentives broadly agree that reducing scope or investing in security is unacceptable because it interferes with rocketship growth. Please accept these 2 years of free credit monitoring in exchange.
As to fixing it by putting an LLM in as the intermediary for your software development process: LLMs are complex and opaque, and any security practitioner should know that in complex and opaque systems there is always interesting exploitable behavior. It also has the bonus of enabling people with less expertise to commit more code faster, which probably interferes with both "improving your comprehension of what your infrastructure is doing" and "reducing the scope of evaluation space you need to address". There's also that pesky little question: "What value did that LLM intermediary provide? Does it cost more to secure it than the value it provides?" Most of the industry seems to have a very tenuous grasp on the first one; as for the second, no one knows, and it's a sin to ask.
On Sat, Nov 15, 2025 at 11:56 PM Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
Dave
_______________________________________________
Dailydave mailing list -- dailydave@lists.aitelfoundation.org
To unsubscribe send an email to dailydave-leave@lists.aitelfoundation.org
On Sun, Nov 16, 2025 at 10:16 AM Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
Imbalances in skills and workforce are real. The gap remains hard to bridge even with the greater automation AI buys us because, at this stage, we want humans in the loop, and for good reasons, and because we are not going to grow the skill set faster than the attack surface, I am afraid.
I hate to sound like a broken record, but I will take a bite regardless: those imbalances are a byproduct of the information asymmetries that, historically, have favoured offense. To actually move the bar in defense, devising clever tech is not enough. Rather, it requires aligning the incentives. Here we are, I said it again.
Now, those of you who know me may be familiar with the market approaches I attempted in my past life. But, more substantially, today the bar is being moved by a regulatory framework that is finally maturing. First we rethink the accountability and liability model in place; then the technical work can be sorted out. To be clear, it is not going to be free. But vulnerabilities have been exacting a price from us for a long time now. Hence, which stakeholders will bear 'the real cost of insecure software' in the future should become a matter of concern.
-- Alfonso
(gingerly raises head above parapet)
Historically, “we’ve” moved the bar in defense.
- Everything is now in the cloud, accessible 24/7 via APIs whose keys are stored in plaintext alongside code, or via preauthenticated sessions
- Everything has ~40 dependencies, each of which has ~40 dependencies, etc., which, combined with a published CVE rate of 1 per 15 minutes (calendar year 2024), means that patching an enterprise before an attacker has an available exploit is expensive (and despite being "measurable", is unlikely to actually stop an attacker, because CVEs are very much not the only bugs)
- Because everything is online now, everyone's identity is online now, at the very least in the form of historic data breaches (free creds!)
- Flawless real-time imitation of humans is now very much a thing, while defences against these attacks are largely absent
- Vibe coding is manufacturing vulnerabilities at faster-than-human speeds
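The "1 per 15 minutes" CVE rate is easy to sanity-check. Assuming roughly 40,000 CVEs published in calendar year 2024 (a round public figure, not an exact count), the arithmetic works out like this:

```python
# Sanity check of the "one CVE per 15 minutes" figure for 2024.
# ~40,000 published CVEs in 2024 is an assumption (round public figure).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
cves_2024 = 40_000
minutes_per_cve = MINUTES_PER_YEAR / cves_2024
print(f"One CVE every {minutes_per_cve:.1f} minutes")  # ~13.1
```

So "1 per 15 minutes" is, if anything, slightly conservative at that volume.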
“We’ve” definitely moved the bar. Just not in a direction that’s helpful for the defender.
Having an LLM as the intermediary for everything seems dubious to me. When you're going downhill, reach for the brake, not the accelerator. LLMs can help, though, if used to generate code judiciously within a well-structured software engineering framework, where we trade velocity of code creation for greater assurance, for instance by using the LLMs to:
- Generate test cases for everything, integrate them into the pipeline, and make sure they're compulsory
- Detect known-bad security patterns
- Carry out variant analysis of known-bad security cases
- Verify that logging and error-handling code is solid
- Always nudge toward reducing the amount of code, and reducing (rather than increasing) technical debt
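To make the "detect known-bad security patterns" gate concrete, here is a minimal sketch of the shape such a pipeline check could take. The pattern list and names are illustrative only; a real gate would use a proper analyzer (or an LLM pass), not three regexes:

```python
import re

# Illustrative known-bad patterns (hypothetical, deliberately tiny list).
KNOWN_BAD = {
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
    "hardcoded secret": re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
    "weak hash": re.compile(r"hashlib\.(md5|sha1)\("),
}

def scan(source: str) -> list[str]:
    """Return the names of known-bad patterns found in a source string."""
    return [name for name, pat in KNOWN_BAD.items() if pat.search(source)]

findings = scan('password = "hunter2"\nhashlib.md5(data)')
print(findings)  # ['hardcoded secret', 'weak hash']
```

A compulsory pipeline stage would fail the build whenever `scan()` returns a non-empty list, which is the "make sure they're compulsory" part of the bullet above.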
Outside of being careful with code generation, LLMs can help with defensive analysis and with providing challenge (e.g. they're surprisingly helpful with threat modelling), and by creating the intermediaries you mentioned. That's provided (obvs) we're very, very careful not to create confused deputies, vulnerable admin agents with "forever" admin creds, or "straddling" services that provide a helpful bridge for attackers between logical areas of the system (network, VPC, container/orchestration, etc.).
I think we’ve now settled at a level of vulnerability where national governments are having to step in with regulation; ransomware is routine, an existential threat to companies, and a matter of life and death for the public sector.
We (on this list) already know how to fix this, I think. The “old” defences work, with (imo) empowered, semi-automated vigilance being the best defence. LLMs give defenders strong advantages, but only if the talent, the money, and the executive understanding are there. Mostly, they aren’t.
I think the answer is societal, rather than technical. We know the technical answers, but they don’t stick, because of the perverse incentives.
This is why I’ve been moving toward regulation (along with others, looking at you Dan Cuthbert). It’s hard, dull, minimal, and takes ages, but it scales. The market will eventually catch up and surface all of this, but for now, we need hackers with clue contributing to the regulatory effort, to keep it reasonable and sane, while giving a measurable improvement. I think that’s how we move the bar in favour of defence, and thank you for coming to my TED talk.
-chris
On 15 Nov 2025, at 21:58, Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
Dave
I like the idea of a software supply chain that people can pay into, one that basically funds a universal bug bounty system for anything that matters.
You can put systems in place that use zero-knowledge exploitability proofs to automate bounty triage, so it doesn't even need to be run by a central trusted entity. As the bounty markets stabilize, what you're left with is a software ecosystem where anyone can build what they need and directly query the estimated cost of attack from point A to point B on any set of capabilities, and any security claim ("Your emails are safe with Microsoft", etc.) can actually be economically quantified. Hosting providers can use their subscription income to pay into the bounty funds of the parts of the supply chain they rely on, making their services more attractive to users (and bug hunters).
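One way to read "estimated cost of attack from point A to point B" is as a cheapest-path query over an attack graph where each edge is priced by the outstanding bounty on the bug that grants that transition. The graph, node names, and dollar figures below are entirely hypothetical, a sketch of the query, not of any real system:

```python
import heapq

# Hypothetical attack graph: nodes are attacker capabilities, edge weights are
# the outstanding bounty ($) on the bug granting that transition -- a proxy for
# the cheapest known acquisition cost of that step.
attack_graph = {
    "internet": {"webapp_rce": 50_000, "phished_creds": 5_000},
    "phished_creds": {"internal_network": 10_000},
    "webapp_rce": {"internal_network": 2_000},
    "internal_network": {"mail_store": 25_000},
    "mail_store": {},
}

def cost_of_attack(graph, src, dst):
    """Cheapest-path estimate of attacker cost from src to dst (Dijkstra)."""
    frontier, seen = [(0, src)], set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == dst:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node].items():
            heapq.heappush(frontier, (cost + weight, nxt))
    return None  # no known attack path

print(cost_of_attack(attack_graph, "internet", "mail_store"))  # 40000
```

In this toy graph the cheap path runs through phished creds rather than the web app, so "your emails are safe" gets quantified at $40k, and raising that number means raising the bounties on the cheapest edges.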
On the other side of this, you now have a world of vuln researchers and their pet LLMs grinding and searching away for unexplored attack paths they can cash in on. Of course these bounty systems can also work for optimization bounties for people making code faster, or feature bounties. Some kid somewhere has an idea for a feature in some piece of software that they're using, so they post about it, and a few thousand people chip in, and when the bounty becomes appetizing enough, someone's AI pet grabs it, and they get paid, and within minutes the update is deployed into the ecosystem.
Then, on everyone's device, depending on their risk tolerance and their use case, the AI can decide whether this new update is supported enough by the ecosystem yet to apply. Maybe we don't apply it now, but maybe in 30 minutes if no one has found anything weird in it. This is the dream, right? A fully automated, self-improving, self-healing software ecosystem where researchers can get paid without even needing to talk to anyone :-D
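That per-device gating decision can be sketched as a tiny policy function. The field names, thresholds, and the way risk tolerance scales the soak time are all illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class Update:
    age_minutes: int        # time since the update was published
    adopters: int           # devices in the ecosystem already running it
    anomaly_reports: int    # weird-behavior reports filed against it

def should_apply(update: Update, risk_tolerance: float) -> bool:
    """Apply only if the update has soaked long enough, has enough adopters,
    and nobody has reported anything weird. Lower risk_tolerance (0..1)
    demands a longer soak and wider adoption. Thresholds are illustrative."""
    min_soak = int(30 * (1 - risk_tolerance))      # up to the 30-minute wait
    min_adopters = int(1000 * (1 - risk_tolerance))
    return (update.anomaly_reports == 0
            and update.age_minutes >= min_soak
            and update.adopters >= min_adopters)

print(should_apply(Update(age_minutes=35, adopters=1500, anomaly_reports=0), 0.2))  # True
print(should_apply(Update(age_minutes=5, adopters=10, anomaly_reports=0), 0.2))     # False
```

The "maybe in 30 mins" behavior falls out naturally: the same update that fails the gate now passes it later, once `age_minutes` and `adopters` grow without any anomaly reports.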
- DEAN
On Sat, Nov 15, 2025 at 6:32 PM Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
Dave