I guess some of us who grew up mapping ports and protocols into their neat buckets will need to live with the fact that everything will eventually ride over a multiplexed 443 socket; just something to think about before the rant.
TL;DR - The answer to your question about measurement and effectiveness is going to come down to: "how long before you can see what I'm doing".
WAFs are a rather complex beast, but I guess they do deserve a look from time to time, as I am not sure who is actually doing the hard work of judging their effectiveness. If you are doing that work, resist the urge to just push everything to Grafana as if a fancy graph will make us happy. You may quickly find that the complexity of testing any number of WAFs comes down to figuring out what a well-represented battery of tests actually is. This is what makes something like MITRE ATT&CK interesting: you are framing a series of techniques which can be modified and changed somewhat dynamically. Testing a WAF then becomes a simpler affair, instead of having to construct hundreds of potential scenarios. Your favorite WAF may miss the injection technique du jour because it runs all of its normalizations through an old IDN library and misses your attack string.
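To make that concrete, here is a rough sketch of what a technique-framed test harness could look like. Everything in it is an assumption for illustration: the example.test endpoint, the tiny payload catalog, and the idea that a 403/406 or a dropped connection means "blocked". A real battery would pull its variants from whatever ATT&CK-style technique catalog you maintain.

    #!/usr/bin/env python3
    # Sketch of a technique-framed WAF test harness. The target URL, the
    # payload catalog, and treating HTTP 403/406 as "blocked" are all
    # illustrative assumptions, not any particular product's behavior.
    import urllib.error
    import urllib.parse
    import urllib.request

    TARGET = "https://example.test/search"  # hypothetical endpoint behind the WAF

    # Each entry is a technique with a few obfuscated variants, so new
    # variants can be swapped in without rewriting the harness.
    TECHNIQUES = {
        "sqli-union": ["' UNION SELECT NULL--", "' uNiOn/**/sElEcT/**/NULL--"],
        "xss-reflected": ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>"],
        "path-traversal": ["../../etc/passwd", "....//....//etc/passwd"],
    }

    def blocked(payload: str) -> bool:
        """Return True if the WAF appears to block the payload."""
        url = TARGET + "?q=" + urllib.parse.quote(payload)
        try:
            with urllib.request.urlopen(url, timeout=5):
                return False          # request went through untouched
        except urllib.error.HTTPError as err:
            return err.code in (403, 406)
        except urllib.error.URLError:
            return True               # dropped connection counts as a block here

    if __name__ == "__main__":
        for technique, payloads in TECHNIQUES.items():
            hits = sum(blocked(p) for p in payloads)
            print(f"{technique}: blocked {hits}/{len(payloads)} variants")

The point of keying the catalog by technique rather than by exploit string is exactly the ATT&CK framing: you can rotate encodings and obfuscations under a stable set of categories and watch the block rate per technique over time.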
Either way, for anyone else's consideration: from what I gather, most commercial WAFs have taken one of three approaches. The first is a simple blacklist approach, typically a ModSecurity CRS with over-the-top signatures. You can probably guess which ones do that; a few commercial WAFs still work this way, and it does prevent some simple attacks. Some of these WAFs try to be even fancier by 'dynamically creating signatures based on attack patterns', which they may call ML. The second approach is the training WAF, which has the other classic problem: I will use some static blacklists like User-Agent sqlmap == bad, but I will raise you 'learning what is good in your app'. This approach has been interesting because what if you just learned badness? Or what if your application is just terrible, such as an application I used a few months ago which, hilariously, constructed Java functions in the browser as a response and sent them back into the web app. I don't know why; I guess so I can { java.lang.Math } while I fill out this web form. The final approach, and maybe the saner approach in some respects(?), is to treat the WAF as a library inside the application to introspect things (a sketch of that idea follows below). This idea is probably not the most novel, as companies like New Relic seem to do this fairly well for performance. Maybe in this way you can build up enough telemetry to make better decisions about the state of good or bad. In addition, we can now 'share data' among a larger number of people and in this way make global decisions based on large datasets. Either way, when it comes to metrics and testing, having a solid test bed and strategy always seems to be the hardest part of the equation, not just the metrics.
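For what it's worth, a toy version of that third "WAF as a library" approach might look something like the WSGI middleware below. The patterns, the stderr telemetry sink, and the Flask wiring in the usage comment are all assumptions for the sake of the sketch, not how New Relic or any WAF vendor actually does it.

    # Minimal sketch of the "WAF as a library" idea: a WSGI middleware that
    # introspects each request and emits telemetry instead of (or before)
    # blocking. The suspicious patterns, the telemetry sink, and the app
    # wiring are illustrative assumptions only.
    import json
    import re
    import sys
    from urllib.parse import parse_qs

    SUSPICIOUS = [
        re.compile(r"union\s+select", re.I),
        re.compile(r"<script\b", re.I),
        re.compile(r"\.\./"),
    ]

    class IntrospectingWAF:
        def __init__(self, app, block=False):
            self.app = app
            self.block = block  # monitor-only by default; flip once you trust the data

        def __call__(self, environ, start_response):
            query = environ.get("QUERY_STRING", "")
            hits = [p.pattern for p in SUSPICIOUS
                    for values in parse_qs(query).values()
                    for v in values if p.search(v)]
            if hits:
                # Telemetry first: record what matched so decisions come from data.
                print(json.dumps({"path": environ.get("PATH_INFO"), "hits": hits}),
                      file=sys.stderr)
                if self.block:
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"blocked\n"]
            return self.app(environ, start_response)

    # Usage with any WSGI app, e.g. a hypothetical Flask app object:
    #   app.wsgi_app = IntrospectingWAF(app.wsgi_app, block=False)

Starting in monitor-only mode is the design choice that matters here: you accumulate telemetry about what the application actually sees before you let the thing make blocking decisions.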
What is even more interesting is probably what a WAF's real threat model is becoming today. While most ops folks may be worried about a ransomware payload or a crypto miner payload, many times the real driver is bots: folks coming in and scraping web data in order to resell, replace, come in cheaper, or what have you. This is, in theory, not illegal, which is even more interesting because it comes down to mining and abusing pseudo-disclosed datasets.
-M
On Sat, Jul 11, 2020 at 12:44 PM Dave Aitel via Dailydave <dailydave@lists.aitelfoundation.org> wrote:
So I'm making a video on metrics, of all things, and I wanted to post both this question https://twitter.com/daveaitel/status/1281629327776522242?s=20 and the best answer so far to the list to see if anyone had any other ideas or followups.
-dave
[image: image.png]
[image: image.png]