Throughput* is perhaps the wrong unit of measure. Most of the time you would be interested
in measuring “requests/second” or “transactions/second”. Aside from, say, a
content-ingesting site/repeater (Facebook/Twitter/Instagram), almost all content for a WAF
to handle is inbound, using little of the available bandwidth. The outbound content is
rarely inspected by such a device, with the exception of 5xx errors or similar (headers).
A colleague pointed out that you are missing the fact that if the WAF is oversubscribed,
you will miss attacks. These metrics are related to the other sets of metrics, depending
on the level of performance you desire. However, it is important to score them in
isolation as well, since you need to understand the value of the protection outside the
scope of resource contention.
contention. Typically we would encourage users to set the performance targets they expect,
and then test the protection capabilities of said solution, be it intrusion prevention,
WAF, firewall state tracking, whatever. Then iteratively increase said performance testing
until the device would reach a failure point in terms of performance or security
protection objectives.
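That loop can be sketched roughly as follows. This is a toy simulation only, assuming a hypothetical DUT model with made-up capacity and detection numbers; real testing would drive actual traffic at a real device.

```python
# Hypothetical sketch of the iterative test loop described above: fix the
# protection test suite, then step up offered load until either the
# performance objective or the protection objective is violated.
# All numbers here are illustrative, not from any real product.

def device_under_test(rps: int) -> dict:
    """Toy WAF model: detection degrades once the device oversubscribes,
    because traffic it cannot inspect carries attacks it cannot see."""
    capacity = 50_000  # requests/second the simulated WAF can inspect
    inspected = min(rps, capacity)
    detection_rate = 0.99 * inspected / rps  # missed traffic = missed attacks
    p95_latency_ms = 2.0 if rps <= capacity else 2.0 * (rps / capacity) ** 2
    return {"detection_rate": detection_rate, "p95_latency_ms": p95_latency_ms}

def find_failure_point(start_rps: int = 10_000, step: int = 10_000,
                       min_detection: float = 0.95,
                       max_latency_ms: float = 10.0):
    """Step load upward; return (last passing rate, first failing rate)."""
    rps, last_good = start_rps, None
    while True:
        result = device_under_test(rps)
        if (result["detection_rate"] < min_detection
                or result["p95_latency_ms"] > max_latency_ms):
            return last_good, rps
        last_good = rps
        rps += step

if __name__ == "__main__":
    passed, failed = find_failure_point()
    print(f"last passing rate: {passed} rps; first failing rate: {failed} rps")
```

The point of the loop is that the pass/fail criterion is a conjunction of a performance target and a protection target, scored separately at each load level.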
-chuck
* Throughput has a technical definition: “the fastest rate at which the count of test
frames transmitted by the DUT is equal to the number of test frames sent to it by the test
equipment” (RFC 2544). It’s used for switches and routers. No one cares anymore, but hey,
I hold a torch for it. The term “goodput” is closer to the meat of what you care about
(webpages, documents, whatever).
From: Dave Aitel via Dailydave <dailydave(a)lists.aitelfoundation.org>
Reply-To: Dave Aitel <dave.aitel(a)gmail.com>
Date: Tuesday, January 5, 2021 at 9:46 AM
To: "dailydave(a)lists.aitelfoundation.org"
<dailydave(a)lists.aitelfoundation.org>
Subject: [Dailydave] The Lost Decade of Security Metrics
A thousand years ago I subscribed to the Security Metrics mailing list. Metrics are
important - or rather, I think good decision making is important, and without metrics your
decision making is essentially luck. But we haven't seen any progress on this in a
decade, and I wanted to talk about the meta-reason why: Oversimplification in the hopes of
scaling.
There's a theme in security metrics, a deep Wrong that the community cannot correct:
trying to devolve all the features in a dataset into a single number. CVSS is the most
obvious example, but Sasha's VEP paper here
(https://www.lawfareblog.com/developing-objective-repeatable-scoring-system-…)
demonstrates the oversimplification issue most clearly, a trap that all of FIRST has
seemingly fallen into.
If I took all the paintings in the world, and ran them through a neural network to score
them 1.0 through 10.0, the resulting number would be, like CVSS, useless. Right now on the
Metrics mailing list, someone is soliciting responses to a survey asking people how they
are using CVSS and how useful it is for them. But the more useful you think CVSS is for
you, the less useful it actually is being, since it can only lead you to wasting the
little security budget you have. CVSS is the phrenology of security metrics. Being simple
and easy to use does not make it helpful for rational decision making.
If we want to make progress, we have to admit that we cannot join the false-positive and
false-negative and throughput numbers of our WAF in any way. They must remain three
different numbers. We can perhaps work on visualizing or representing this information
differently, but they're in different dimensions and cannot be combined. The same is
true for vulnerabilities. The reason security managers are reaching for a yes/no "Is
there an exploit available" metric for patch prioritization is that CVSS does not
work, and won't ever work, and despite the sunk cost the community has put into it,
should be thrown out wholesale.
-dave