Can a hamster do interprocedural analysis? What size of hamster can turn a tier-2 geopolitical adversary's cyber force into a tier-1 adversary? Is the best use of a hamster finding 0day or orchestrating the offensive operations themselves? These are all great questions for policy teams to ponder as they pontificate over how to properly regulate AI.

On one hand, as a technologist, your tendency will be to try to explain to policy teams what makes a scary adversary scary, maybe to get involved in building a taxonomy of adversary tiers, to start classifying operations as "sophisticated" and "not sophisticated". This is not useful, but it feels useful! It is like recycling cardboard boxes, all while knowing that you, as an organism, are primarily oriented towards boiling the oceans and turning the planet into Venus as quickly and efficiently as possible. Remember that today's small stone crab claw is tomorrow's "extra large" stone crab claw, because all the big ones got eaten, and that's how generational amnesia works!

In other words, while STORM-0558's operation against Microsoft was slick like oil across the ever-hotter waters of the Gulf of Mexico when it happened, the million teams doing the exact same Active Directory tricks the next month were just small fish, despite their impact on their targets. And most big-impact operations could have been done by second-tier penetration testing teams, let alone nation-state adversaries. If you work in Cyber Policy long enough, you will see people make tables comparing various hacks from the last fifteen years, which is like comparing the bite strength of a Cretaceous monster to that of your average modern iguana.

Likewise, most of Policy-world is, like we all are, obsessed with 0day. We like to count them with the enthusiasm of a vampire puppet on a children's TV show! But we also know that finding 0day is not in itself a sign of sophistication; finding the right 0day at the right time is. I don't know how to classify Orange Tsai's PHP character innovation, but because it doesn't fit neatly into a spreadsheet, it might as well not exist?

"If AI finds 0day, then it must be regulated" is a fun position to take in the many fancy halls and tedious Zoom calls where a pompous attitude and an ill-fitting suit are table stakes for attendance and having actually written code that uses Huggingface is considered bad form. But regulating technologies that can find 0day is a dead end. The current best way to find 0day is fuzzers and the dumber they are, the better they work most of the time. When it comes to operations, the current best way to hack is to email people and ask them for their password? Is that still true? Or do we just all look through huge databases of usernames and passwords that have already leaked and just use those now? I'm sure AI can also do that, but I'm also sure that it doesn't matter.

I try to tell policy people this: What makes a scary adversary is huge resources, huge motivation, and huge innovation. The big names will probably use AI to automate pieces of their workflow, but nobody is going to saddle up some hamsters and suddenly turn from a not-scary adversary into a scary one.

-dave