As you know, humans like to invent comfort words. One of my favorites is
"luck". The theory being that yes, the universe has dice, but they are
loaded in your favor. Properly used, these words are a spell - they allow
us to have courage when a sober mind would quail. But when you become a
professional, you have to give up these crutches. Only poor poker players
believe in "luck".
In computer science, and especially in machine learning, one of these words
you have to give up is "consciousness", which makes a lot of the current
discussion on AI feel pompous and unbearable. Just because you can feel
something doesn't make it real. The other day someone said to me, "I don't
know why, but it FEELS true to me" and I haven't been able to get that out
of my head since then.
In security we are often challenged to reject the assumptions a system is
built on so we can develop a third sight - in some cases clear enough you
gain the ability to predict the future. The raw fuel of this is a vast
reliance on an information torrent that to any outside party looks like
addiction.
I'm by no means an expert on war, conflict, or the Middle East, but what
Israel has done to Hezbollah is stunning to most of the experts, and in my
opinion, it's because they missed a major shift in warfare that was largely
driven by *Java Middleware*, of all things. So many of us lived through the
transformation brought about by precision GPS and ISR-based air power. This
much is obvious to the crowd that writes articles in war journals. Air war
is cool in a very teenage way, and drone movies have a certain visceral
feel to them that academics like to write about because it says to the
world that they are the kind of hard-core experts that understand the
blood-and-guts reality of modern combat. You know what's not sexy? Java.
But what is network-centric warfare
<https://en.wikipedia.org/wiki/Network-centric_warfare#:~:text=Network-centr…>
really, if not endless rooms full of programmers having meetings about
JavaBeans and SOAP contracts? What does it look like when a real team goes
after your entire society and builds surveillance into everything? It's
hard to move in the modern world without a wake billowing behind you into
the ether. Al Qaeda learned this the hard way, and now I think we can see
what it looks like applied to another decentralized covert organization.
In the end, it's not the drones or the missiles that make the
difference - it's the quiet, relentless infrastructure underneath it all,
overlooked by those too focused on the shiny surface of modern warfare. The
real revolution isn't in the explosions but in the code. The experts who
missed this shift weren't outmaneuvered by luck or brute force; they were
outmatched by those who embraced the unglamorous, intricate work of
building a machine that sees all and remembers everything. The future of
conflict, like security and AI, won't belong to those dazzled by the
spectacular. It will belong to those who understand that, more often than
not, the real power lies in the invisible.
-dave
People doing software security often use LLMs more as orchestrators than
anything else. But there are so many more complicated ways to use them in our
space coming down the pipe. Obviously the next evolution of SBOMs
<https://www.cisa.gov/resources-tools/resources/cisa-sbom-rama> is that
they represent not just what is contained in the code as some static tree
of library dependencies, but also what that code does in a summary fashion
that you can check once you get the final binaries. In a certain sense, you
can think of this as a behavioral attestation between the software
publisher and the consumer who is actually running the product.
In other words, if my product is meant to connect to WWW.SPYWARE.RU, then
it should say so in the SBOM behavioral manifest. But of course in practice
these things get quite complicated, and hence you need to summarize
semi-structured data (aka, the behavioral manifest is rarely exact), and
then compare it to what is seen when the software itself is run (which if
you've ever run strace ...is voluminous). That smells like a job for an
LLM, or at the very least, a vector comparison. Likewise, automatically
building harnesses to run and capture security sensitive information (or
performance information as we learned from XZ), is rapidly also becoming a
job <https://google.github.io/oss-fuzz/research/llms/target_generation/>
for an LLM.
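To make the attestation idea concrete, here is a minimal sketch of the simplest possible check: compare the network endpoints a vendor declares in a manifest against the endpoints actually observed at runtime (say, parsed out of strace output). The manifest format, field names, and endpoint values here are hypothetical illustrations, not any real SBOM or VEX schema:

```python
# Toy "behavioral attestation" check: flag runtime behavior the vendor
# never declared. Manifest schema and hostnames below are invented for
# illustration only.

DECLARED_MANIFEST = {
    "network": {"endpoints": ["api.vendor.example", "telemetry.vendor.example"]},
}

# Endpoints observed while running the binary, e.g. extracted from
# strace/eBPF connection logs by a separate harness.
OBSERVED_ENDPOINTS = [
    "api.vendor.example",
    "www.spyware.ru",  # not in the manifest: this is what we want to catch
]

def undeclared_behavior(manifest: dict, observed: list) -> set:
    """Return the set of observed endpoints absent from the manifest."""
    declared = set(manifest["network"]["endpoints"])
    return {host for host in observed if host not in declared}

if __name__ == "__main__":
    for host in sorted(undeclared_behavior(DECLARED_MANIFEST, OBSERVED_ENDPOINTS)):
        print(f"UNDECLARED: {host}")
```

Exact string matching is of course the part that breaks down in practice: real manifests are fuzzy summaries and real traces are voluminous, which is exactly why the comparison step looks like a job for embeddings-plus-vector-similarity or an LLM rather than set difference.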
I perhaps am channeling everyone else's
<https://www.cisa.gov/speaker/allan-friedman> worry that too much of the
SBOM community is arguing about which XML fields belong in a VEX addendum,
rather than pushing the concepts forward to actually solve problems. Or
perhaps not! At some level, the software vendors are getting dragged
through this process by their hair, which is very fun to watch.
-dave