A thousand years ago I subscribed to the Security Metrics mailing list. Metrics are important - or rather, I think good decision making is important, and without metrics your decision making is essentially luck. But we haven't seen any progress on this in a decade, and I wanted to talk about the meta-reason why: Oversimplification in the hopes of scaling.
There's a theme in security metrics, a deep Wrong the community cannot correct: trying to collapse all the features in a dataset into a single number. CVSS is the most obvious example, but Sasha's VEP paper here ( https://www.lawfareblog.com/developing-objective-repeatable-scoring-system-v...) demonstrates the oversimplification issue most clearly, and it's one that all of FIRST has seemingly fallen into.
If I took all the paintings in the world and ran them through a neural network to score them 1.0 through 10.0, the resulting number would be, like CVSS, useless. Right now on the Metrics mailing list someone is circulating a survey asking people how they use CVSS and how useful it is for them. But the more useful you think CVSS is, the less useful it is actually being, since it can only lead you to waste the little security budget you have. *CVSS is the phrenology of security metrics.* Being simple and easy to use does not make it helpful for rational decision making.
If we want to make progress, we have to admit that we cannot combine the false-positive, false-negative, and throughput numbers of our WAF in any way. They must remain three different numbers. We can perhaps work on visualizing or representing this information differently, but the numbers live in different dimensions and cannot be merged. The same is true for vulnerabilities. The reason security managers are reaching for a yes/no "Is there an exploit available?" metric for patch prioritization is that CVSS does not work and won't ever work, and despite the sunk cost the community has put into it, it should be thrown out wholesale.
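To make the dimensionality point concrete, here's a toy sketch. The numbers and the weighted scoring function are entirely made up (any fixed weighting has the same flaw); the point is that two WAFs with opposite failure modes can land on the identical scalar score, so the score can't drive the decision:

```python
# Toy illustration with hypothetical numbers: collapsing a WAF's
# false-positive rate, false-negative rate, and throughput into one
# number erases exactly the information a decision-maker needs.

def scalar_score(fp_rate, fn_rate, throughput_gbps):
    # A made-up CVSS-style weighting. The specific weights don't matter:
    # any projection from three dimensions to one loses information.
    return 10.0 - (40.0 * fp_rate + 40.0 * fn_rate) + 0.1 * throughput_gbps

waf_a = {"fp_rate": 0.10, "fn_rate": 0.01, "throughput_gbps": 5.0}  # noisy, but catches attacks
waf_b = {"fp_rate": 0.01, "fn_rate": 0.10, "throughput_gbps": 5.0}  # quiet, but misses attacks

score_a = round(scalar_score(**waf_a), 6)
score_b = round(scalar_score(**waf_b), 6)
print(score_a, score_b)  # identical scores for very different devices
```

If you're drowning in false positives, waf_a is the wrong buy; if you're defending against real attackers, waf_b is. The scalar is blind to the difference either way.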
-dave