MITRE ATT&CK > CVE/CVSS
ATT&CK Enterprise v8 is more granular than ever for vulnerability purposes, but it has always been extensive for threat purposes.

If you want to express CVEs in maldocs or malware (including webshells), may I suggest Yara and/or Suricata (maybe shortcuts such as JA3 or JARM if TLS applies)?
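For illustration, here is a minimal sketch of what I mean, assuming the yara-python bindings; the string and the CVE tag are placeholders, not a real detection:

# Sketch: express "this sample maps to CVE X" as Yara rule metadata, assuming
# the yara-python bindings (pip install yara-python). The webshell marker and
# the CVE tag below are placeholders for illustration only.
import yara

RULE = r'''
rule example_webshell_cve_tagged
{
    meta:
        cve = "CVE-XXXX-XXXXX"                    // placeholder for the mapped CVE
        author = "illustrative only"
    strings:
        $marker = "eval(base64_decode($_POST["    // hypothetical webshell marker
    condition:
        $marker
}
'''

rules = yara.compile(source=RULE)
for m in rules.match(filepath="sample.php"):
    print(m.rule, m.meta.get("cve"))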
If you want to express CVEs in runtime app infra, may I suggest caldera_pathfinder? e.g., this is Heartbleed -- https://github.com/center-for-threat-informed-defense/caldera_pathfinder/blob/master/payloads/heartbleed.py
There are a few corner cases where a sandbox, debugger, or other frame of reference is necessary to make the cyber-risk use case of a threat-vulnerability scenario visible -- and in those cases, I think we can still package all of this up in STIX 2.1 and/or MISP JSON, right?
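And the packaging is not much code either -- a minimal sketch using the OASIS stix2 Python library (pip install stix2); the Yara pattern here is illustrative, not vetted intel:

# Sketch: bundle a CVE and its detection rule as STIX 2.1 objects, using the
# OASIS stix2 Python library. The Yara pattern is illustrative, not vetted intel.
from stix2 import Bundle, Indicator, Relationship, Vulnerability

vuln = Vulnerability(
    name="CVE-2014-0160",
    external_references=[{"source_name": "cve", "external_id": "CVE-2014-0160"}],
)

indicator = Indicator(
    name="Heartbleed heartbeat probe (illustrative)",
    pattern_type="yara",
    pattern="rule heartbleed_probe { strings: $hb = { 18 03 02 00 03 01 40 00 } condition: $hb }",
    valid_from="2021-01-05T00:00:00Z",
)

bundle = Bundle(vuln, indicator, Relationship(indicator, "related-to", vuln))
print(bundle.serialize(pretty=True))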

It's my strong opinion that none of the above need or want CVSS or any variation of Risk = Prob * Impact * Hamsterdamage. Better to auto-enrich every single field from VIA4CVE (e.g., http://cve.circl.lu/cve/cve-2014-0160) and cvedetails except the CVSS-related ones, lol.
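That enrichment is roughly a dozen lines -- a sketch assuming the public cve-search JSON API behind cve.circl.lu, with the list of CVSS-flavored keys treated as illustrative since field names vary between instances:

# Sketch: pull a CVE record from a cve-search instance (the API behind
# cve.circl.lu) and drop the CVSS-flavored fields. Key names vary across
# cve-search/VIA4CVE versions, so treat the filter set as illustrative.
import json
import urllib.request

CVE_ID = "CVE-2014-0160"
CVSS_KEYS = {"cvss", "cvss-time", "cvss-vector", "cvss3", "impact", "access"}

with urllib.request.urlopen(f"https://cve.circl.lu/api/cve/{CVE_ID}") as resp:
    record = json.load(resp)

enriched = {k: v for k, v in record.items() if k.lower() not in CVSS_KEYS}
print(json.dumps(enriched, indent=2)[:2000])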
It's my strong opinion that there must be a single place to go to grab CVE-related mappings to/from actors, together with those Yara/Suricata/JA3/JARM/Caldera/cve-search/VIA4CVE/cvedetails indicator bundles in STIX 2.1 / MISP JSON.
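In stix2 terms, the actor mapping is just one more object and one more relationship in the same kind of bundle -- the actor name here is invented purely for illustration, only the CVE id is real:

# Sketch: a CVE-to-actor mapping as STIX 2.1 objects via the stix2 library.
# The actor is invented for illustration; only the CVE id is real.
from stix2 import Bundle, Relationship, ThreatActor, Vulnerability

vuln = Vulnerability(
    name="CVE-2014-0160",
    external_references=[{"source_name": "cve", "external_id": "CVE-2014-0160"}],
)
actor = ThreatActor(name="Example Actor (illustrative)", threat_actor_types=["unknown"])

bundle = Bundle(vuln, actor, Relationship(actor, "targets", vuln))
print(bundle.serialize(pretty=True))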


On Tue, Jan 5, 2021 at 10:00 AM Chuck McAuley via Dailydave <dailydave@lists.aitelfoundation.org> wrote:

Throughput* is perhaps the wrong unit of measure. Most of the time you would be interested in measuring “requests/second” or “transactions/second”. Aside from, say, a content-ingesting site/repeater (Facebook/Twitter/Instagram), almost all content a WAF has to handle is inbound, using a small amount of the available bandwidth. The outbound content is rarely inspected by such a device, with the exception of 5xx errors or similar (headers).

 

A colleague pointed out that you are missing the fact that if the WAF is oversubscribed, you will miss attacks. Protection metrics are therefore related to the other sets of metrics, depending on the level of performance you desire. However, it is important to score them in isolation as well, since you need to understand the value of protection outside the scope of resource contention. Typically we would encourage users to set the performance targets they expect, then test the protection capabilities of the solution, be it intrusion prevention, WAF, firewall state tracking, whatever, and then iteratively increase the load until the device reaches a failure point in terms of performance or security-protection objectives.
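In rough pseudo-Python, that iterate-until-failure loop looks something like the sketch below; the step sizes and the two probe functions are placeholders for whatever traffic generator and attack corpus you actually use:

# Sketch of the iterate-until-failure methodology described above. The numbers
# and the two probe functions are placeholders for your traffic generator and
# attack corpus.
def run_load_test(rps):
    # Placeholder: drive `rps` requests/sec of clean traffic and return True
    # if latency/goodput targets are still met.
    return True

def run_attack_set(rps):
    # Placeholder: replay the attack corpus while the device is loaded at
    # `rps` and return True if everything is still blocked.
    return True

def find_failure_point(start_rps=1_000, step_rps=1_000, max_rps=100_000):
    rps = start_rps
    while rps <= max_rps:
        if not (run_load_test(rps) and run_attack_set(rps)):
            return rps      # first rate at which a performance or protection objective fails
        rps += step_rps
    return None             # no failure within the tested range

print(find_failure_point())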

 

-chuck

 

 

* Throughput has a technical definition: “the fastest rate at which the count of test frames transmitted by the DUT is equal to the number of test frames sent to it by the test equipment” (RFC 2544). It’s used for switches and routers. No one cares anymore, but hey, I hold a torch for it. The term “goodput” is the meat of what you actually care about (webpages, documents, whatever).

 

 

From: Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org>
Reply-To: Dave Aitel <dave.aitel@gmail.com>
Date: Tuesday, January 5, 2021 at 9:46 AM
To: "dailydave@lists.aitelfoundation.org" <dailydave@lists.aitelfoundation.org>
Subject: [Dailydave] The Lost Decade of Security Metrics

 


A thousand years ago I subscribed to the Security Metrics mailing list. Metrics are important - or rather, I think good decision making is important, and without metrics your decision making is essentially luck. But we haven't seen any progress on this in a decade, and I wanted to talk about the meta-reason why: Oversimplification in the hopes of scaling. 

 

There's a theme in security metrics, a deep Wrong that the community cannot correct: trying to reduce the features in a dataset to a single number. CVSS is the most obvious example, but Sasha's VEP paper (https://www.lawfareblog.com/developing-objective-repeatable-scoring-system-vulnerability-equities-process) demonstrates the categorical version of the oversimplification problem most clearly, one that all of FIRST has seemingly fallen into. 

 

If I took all the paintings in the world and ran them through a neural network to score them from 1.0 to 10.0, the resulting number would be, like CVSS, useless. Right now on the Metrics mailing list someone is soliciting responses to a survey asking people how they are using CVSS and how useful it might be for them. But the more useful you think CVSS is, the less useful it actually is, since it can only lead you to waste the little security budget you have. CVSS is the phrenology of security metrics. Being simple and easy to use does not make it helpful for rational decision making. 

 

If we want to make progress, we have to admit that we cannot join the false-positive, false-negative, and throughput numbers of our WAF in any way. They must remain three different numbers. We can perhaps work on visualizing or representing this information differently, but they live in different dimensions and cannot be combined. The same is true for vulnerabilities. The reason security managers are reaching for a yes/no "is there an exploit available" metric for patch prioritization is that CVSS does not work, won't ever work, and, despite the sunk cost the community has put into it, should be thrown out wholesale.

 

-dave

_______________________________________________
Dailydave mailing list -- dailydave@lists.aitelfoundation.org
To unsubscribe send an email to dailydave-leave@lists.aitelfoundation.org