So one of my new fav questions to ask policy teams is what they would do if
they were told to switch their offensive team entirely to worms. Nothing
else. Just worms. What needs to change to make that happen - from op tempo
to supply chain to personnel to policy and technological investment?
And how would their defensive team need to change strategically if they
were facing such an offensive team?
It's a fun thing to see people wrap their minds around. :)
Also, if you missed it, yesterday's CYBER HOT TAKES are here:
Recently I read this post from Maddie Stone of Google's Project Zero. In
particular, it has a bolded line: "*As a community, our ability to detect
0-days being used in the wild is severely lacking to the point that we
can’t draw significant conclusions due to the lack of (and biases in) the
data we have collected.*" That is the most honest thing I've read from the
defensive community in a long while. I feel like it's a good idea to make a
reflexive habit of asking "What am I looking directly at that I'm not
seeing?"
As a kid I was obsessed with various elements of biology, despite not
having the grades to show for it. But as an adult I wish I could go back in
time and just blow my own mind with a few short things I've learned. Most
of them are obvious in retrospect, such as the following:
- Birds are dinosaurs
- Genes sometimes travel between species, carried by bacteria that infect
both of them
- 40% of all animals are parasites
- Metabolism (and cells) evolved before DNA
- Energy Epochs are useful predictive tools
I mean, for most people on this list the same thing is true for hacking.
For me these things might include:
- State tables are more important than memory handling
- Timing attacks are impossible to explain to people, so they never get
fixed
- Attack tools tend towards generics
- It doesn't matter if they catch you, if they won't ever do the
meta-analysis to put the larger picture together
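The timing-attack point above can be sketched in a few lines. This is my own illustrative example, not from the original post (the `naive_compare` function and the secret are invented): a naive early-exit comparison does measurably more work the more of the secret a guess gets right, and that extra work is the whole leak.

```python
# Sketch of why timing attacks exist: a naive comparison leaks how many
# leading bytes of a guess are correct. Illustrative only -- real attacks
# measure wall-clock time over a network, not a comparison counter.

def naive_compare(secret: bytes, guess: bytes) -> tuple[bool, int]:
    """Early-exit comparison; returns (equal, number of byte comparisons made)."""
    work = 0
    for s, g in zip(secret, guess):
        work += 1
        if s != g:  # bail out at the first mismatch -- this is the leak
            return False, work
    return len(secret) == len(guess), work

secret = b"hunter2"

# The more correct prefix bytes a guess has, the more work (time) it costs:
_, w_bad = naive_compare(secret, b"zzzzzzz")  # wrong first byte: 1 comparison
_, w_mid = naive_compare(secret, b"hunzzzz")  # 3 correct bytes: 4 comparisons
_, w_hit = naive_compare(secret, b"hunter2")  # full match: 7 comparisons
```

An attacker who can observe that work gradient can recover the secret one byte at a time, which is why constant-time comparison functions exist - and why the attack is so hard to explain to anyone who only looks at the return value.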
A thousand years ago I subscribed to the Security Metrics mailing list.
Metrics are important - or rather, I think good decision making is
important, and without metrics your decision making is essentially luck.
But we haven't seen any progress on this in a decade, and I wanted to talk
about the meta-reason why: Oversimplification in the hopes of scaling.
There's a theme in security metrics, a deep Wrong that the community cannot
correct: trying to reduce the features in a dataset to a single number. CVSS
is the most obvious example, but Sasha's VEP paper demonstrates the
oversimplification issue most clearly, and it is one that all of FIRST has
seemingly fallen into.
If I took all the paintings in the world, and ran them through a neural
network to score them 1.0 through 10.0, the resulting number would be, like
CVSS, useless. Right now on the Metrics mailing list someone is soliciting
responses to a survey asking people how they use CVSS and how useful it is
to them. But the more useful you think CVSS is, the less useful it actually
is, since it can only lead you to waste the little security budget you
have. *CVSS is the phrenology of security metrics.* Being simple and easy
to use does not make it helpful for rational decision making.
If we want to make progress, we have to admit that we cannot join the
false-positive and false-negative and throughput numbers of our WAF in any
way. They must remain three different numbers. We can perhaps work on
visualizing or representing this information differently, but they're in
different dimensions and cannot be combined. The same is true for
vulnerabilities. The reason security managers are reaching for a yes/no "Is
there an exploit available" metric for patch prioritization is that CVSS
does not work and won't ever work, and, despite the sunk cost the community
has put into it, it should be thrown out wholesale.
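To make the dimensionality point concrete, here is a sketch of my own (the WAF numbers and the weightings are invented for illustration): any attempt to collapse false-positive rate, false-negative rate, and throughput into one score requires an arbitrary weighting, and two equally defensible weightings can reverse the ranking.

```python
# Two hypothetical WAFs, each described by three incommensurable metrics.
# All numbers are invented for illustration.
waf_a = {"fp_rate": 0.01, "fn_rate": 0.20, "throughput_rps": 50_000}
waf_b = {"fp_rate": 0.10, "fn_rate": 0.02, "throughput_rps": 30_000}

def scalar_score(waf, weights):
    """Collapse three dimensions into one number with an arbitrary weighting."""
    return (weights[0] * (1 - waf["fp_rate"])
            + weights[1] * (1 - waf["fn_rate"])
            + weights[2] * waf["throughput_rps"] / 100_000)

# Two "reasonable" weightings produce opposite rankings:
prefer_low_fp = (0.6, 0.2, 0.2)  # penalize false positives most
prefer_low_fn = (0.2, 0.6, 0.2)  # penalize false negatives most

a1, b1 = scalar_score(waf_a, prefer_low_fp), scalar_score(waf_b, prefer_low_fp)
a2, b2 = scalar_score(waf_a, prefer_low_fn), scalar_score(waf_b, prefer_low_fn)
# Under prefer_low_fp, WAF A wins; under prefer_low_fn, WAF B wins.
```

The scalar tells you more about the person who chose the weights than about the WAFs, which is exactly the CVSS problem in miniature.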