The security community (aka, all of us on this list) is still reeling from the impact of Jia Tan putting a sophisticated backdoor into the xz package, and all of the associated HUMINT effort that went into it. And I realized from talking to people about it, especially people in the cyber policy realm but also technical experts, that there's a pretty big gap when it comes to understanding why someone would put in a full backdoor at all, instead of just adding a few bugdoors.
Some Background:
1. A post on what NOBUS means when it comes to backdoors: https://cybersecpolitics.blogspot.com/2019/05/hope-is-not-nobus-strategy.html
2. Responsible offense from a bunch of Americans: https://www.lawfaremedia.org/article/responsible-cyber-offense
3. Responsible offense from the UK: https://www.gov.uk/government/publications/responsible-cyber-power-in-practice/responsible-cyber-power-in-practice-html
4. Responsible offense from the Germans: https://www.stiftung-nv.de/sites/default/files/snv_active_cyber_defense_toward_operational_norms.pdf
5. A university banned from Linux for contributing backdoors as part of a research project: https://www.theverge.com/2021/4/30/22410164/linux-kernel-university-of-minnesota-banned-open-source
So as with all areas of responsible offense, there is a tight connection, and a real tension, between good OPSEC and responsible operations. In particular, it is very easy to get yourself onto the team for a big project and add code that introduces exploitable conditions: perhaps it handles input in a way that causes a memory corruption, or does authentication slightly wrong in certain circumstances (see the sketch below).
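To make the "slightly wrong authentication" idea concrete, here is a minimal, entirely hypothetical Python sketch (all names invented, no real project implied): a truncated comparison that reads like defensive coding in review but quietly becomes an authentication bypass under one rare condition.

```python
import hashlib
import hmac

def verify_token(stored_digest: bytes, presented_token: bytes) -> bool:
    """Check a presented token against its stored SHA-256 digest."""
    digest = hashlib.sha256(presented_token).digest()
    # Bugdoor: truncating both sides to the stored digest's length looks
    # like harmless "length normalization". But if an empty digest ever
    # gets stored (say, via a provisioning bug the same contributor
    # quietly arranged elsewhere), n == 0 and *every* token verifies,
    # because compare_digest(b"", b"") is True.
    n = len(stored_digest)
    return hmac.compare_digest(digest[:n], stored_digest[:n])
```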
From an operational security standpoint, these bugdoors are easy to introduce, and I don't know of a serious hacking group that hasn't played with this - if for no other reason than to fix bugs that cause crashes while you are trying to exploit some other, better bug. Reading the original UMN paper (which was under-appreciated in its time, despite getting them banned from Linux!), you can see that it is not always about adding bugs, but often about adding enabling features for bugs that already exist in the codebase, making them more reachable - see the second sketch below.
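Here is what an enabling feature can look like, again with made-up names: the vulnerable code already exists in the tree, and the hostile patch merely adds a harmless-looking option that routes untrusted input to it.

```python
import os

BASE_DIR = "/var/lib/app/templates"

def load_template(name: str) -> str:
    # The latent bug: this loader has always been path-traversal-prone
    # ("../../../../etc/passwd" climbs out of BASE_DIR, and an absolute
    # name replaces it entirely), but until now it was only ever called
    # with hardcoded, trusted template names.
    path = os.path.join(BASE_DIR, name)
    with open(path) as f:
        return f.read()

def render_page(request_params: dict) -> str:
    # The "enabling feature": an innocuous patch letting clients pick
    # their own template. It adds no bug of its own and reviews clean,
    # but it makes the pre-existing traversal reachable from untrusted
    # input for the first time.
    name = request_params.get("template", "default.html")
    return load_template(name)
```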
In some ways, attacking the open source community by hacking into developers or repositories has been the traditional craft of the ancient Unix hackers of the 90's, who understand a web of trust the way a Polynesian navigator understands the swells and currents between islands.
From an opsec perspective, though, bugdoors have limits. Fuzzers can find them, other hackers can find them, and once found, they can be used by anyone with the skill to write the exploit. Using them is also risky: no memory corruption is 100% reliable, and when they fail, *they fail in the worst way, in the worst place, at the worst time*. Likewise, the traffic you may have to generate to shape memory on the target host is likely to be anomalous, and easily signatured.
And from a responsible offensive cyber operations perspective, a bugdoor gives you no way to mathematically demonstrate that the hosts you target are protected from third parties. *Bugdoors are never a NOBUS capability.*
Ideally, a NOBUS capability would allow you, and only you, to get in while also preventing replay attacks, but a close second is a simple asymmetric-key scheme of some kind where the target's identity is bound into the signed data. The XZ backdoor did this: it verified an Ed448 elliptic-curve signature whose signed data included the target's SSH host public key (see https://github.com/amlweems/xzbot). A minimal sketch of that pattern is below.
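To show why that scheme is NOBUS-ish, here is a minimal Python sketch (using the pyca/cryptography library; all key material, command strings, and host keys are made up, and this is loosely modeled on the scheme xzbot documents, not the actual backdoor code): without the operator's private key a third party cannot forge a trigger, and because the target's host key is bound into the signed message, a captured trigger cannot be replayed against a different host.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed448 import Ed448PrivateKey

def sign_for_target(private_key, command: bytes, host_key: bytes) -> bytes:
    # Operator side: the signed message binds the command to one target.
    return private_key.sign(command + host_key)

def should_execute(public_key, command: bytes, signature: bytes,
                   host_key: bytes) -> bool:
    # Implant side: only the holder of the private key can produce a
    # valid trigger, and only for the host whose key is in the message.
    try:
        public_key.verify(signature, command + host_key)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    operator_key = Ed448PrivateKey.generate()
    implant_pubkey = operator_key.public_key()  # baked into the implant

    host_a = b"ssh-ed25519 AAAA...host-A"  # placeholder host keys
    host_b = b"ssh-ed25519 AAAA...host-B"

    trigger = sign_for_target(operator_key, b"run-payload", host_a)
    assert should_execute(implant_pubkey, b"run-payload", trigger, host_a)
    # Replaying the captured trigger against a different host fails:
    assert not should_execute(implant_pubkey, b"run-payload", trigger, host_b)
```

Even if a defender captures the trigger on the wire, all they learn is that one signed command for one host; they cannot mint their own, which is the whole point of the NOBUS argument for backdoors over bugdoors.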
Thanks, Dave
dailydave@lists.aitelfoundation.org