I wanted everyone to browse here and enjoy this Microsoft Teams vulnerability: https://github.com/oskarsve/ms-teams-rce/blob/main/README.md

I also enjoy the discussion it has engendered about how to measure vulnerabilities that live "in the cloud" or arrive via "auto-update". It would be good to get clarity on these things.

Measurement is the first step of something else: intermediate analysis. I think we failed, as a community, when we accepted the premise that vulnerabilities could be flattened down to simple numbers - CVSS scores, VEP scores, whatever. Bugs are inherently complex and interlinked. Losing that complexity is losing their essence - you lose the ability to think coherently about them.
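To make the flattening concrete, here's a minimal sketch of the CVSS v3.1 base score arithmetic, using the metric weights from the FIRST specification and a simplified rounding rule. The example vector at the bottom is hypothetical - a network-reachable, zero-click, scope-changing bug - and not an official score for the Teams issue.

import math

# CVSS v3.1 base-score weights, from the FIRST specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                       # Attack Complexity
UI  = {"N": 0.85, "R": 0.62}                       # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact
PR  = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},     # Privileges Required,
       "C": {"N": 0.85, "L": 0.68, "H": 0.5}}      # keyed by Scope

def roundup(x):
    # Simplified form of the spec's round-up-to-one-decimal rule.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "U":
        impact = 6.42 * iss
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = 8.22 * AV[av] * AC[ac] * PR[scope][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope == "C":
        total *= 1.08
    return roundup(min(total, 10))

# Hypothetical vector: AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H -> 10.0
print(base_score("N", "L", "N", "N", "C", "H", "H", "H"))

Everything that actually matters about a bug - the chain it enables, who runs the software, how it reaches its targets - gets crushed into that one decimal.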

But if you follow any set of scoring guidelines for vulnerabilities - and the best ones are qualitative, like the Pwnie Awards - you know that even though a massive amount of effort has gone into mitigations, assessment, secure coding frameworks, education, and everything else that makes up the meta-SDL, we are flooded with bugs. The mitigations aren't working. The secure coding frameworks aren't either. For every bug we find and fix, a dozen more are written by the developers we thought we had trained.

It is a natural response to try to hide from this knowledge of failure. To cook the CVE numbers. To take refuge in our stock prices. Let's write another blogpost about catching an APT and give it a funny insulting nickname. 

Unfortunately, without intermediate analysis you cannot do higher-level strategy. And the treadmill of the information security technology arena is beyond exhausting. An equally fast treadmill runs next to it for security policy and legal policy, and another one for incident response. There's no intermediate analysis happening in any of these areas, so we are left making strategy choices by random chance, luck, or the occasional herculean effort.

-dave