So my first thought is that performance measurement tools seem exactly aimed at a lot of security problems, but performance people are extremely reluctant to admit that (https://aus.social/@brendangregg/110276319669838295) because of the drama involved in the security market. Which is very smart of them! :)
Secondly, I wanted to re-link to Halvar's QCon keynote: https://docs.google.com/presentation/d/1wOT5kOWkQybVTHzB7uLXpU39ctYzXpOs2xVyD4zuYXY/edit#. He has a section on the difficulty of getting good performance benchmarks, which you would typically run as part of your build chain. In theory, you have a lot of compilation features you can twiddle: you change their values, compile your program, and get a number for how fast it is. In the real world this turns out to be basically impossible, for reasons I'll let him explain in his presentation (see below).
[image: image.png]
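To make the workflow concrete, here's a minimal sketch of that naive build-chain benchmark loop. The compiler invocation, source file, and flag sets are hypothetical placeholders; the point is that even this simple comparison falls apart in practice, since run-to-run noise (machine state, code layout, frequency scaling) swamps the effect you're trying to measure.

import subprocess
import time

# Hypothetical flag sets you'd like to compare in the build chain.
FLAG_SETS = [
    ["-O2"],
    ["-O3"],
    ["-O3", "-march=native"],
    ["-O2", "-flto"],
]

def benchmark(flags, runs=5):
    # Compile a hypothetical prog.c with this flag set.
    subprocess.run(["cc", *flags, "-o", "prog", "prog.c"], check=True)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./prog"], check=True)
        timings.append(time.perf_counter() - start)
    # Taking the minimum is a common convention, but per Halvar's slides
    # the numbers are noisy enough that picking a "winner" is unreliable.
    return min(timings)

for flags in FLAG_SETS:
    print(flags, benchmark(flags))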
A lot of these performance problems seem solvable only by a continuous evolutionary process: you maintain a population of different compilation variables, introduce new ones over time, kill off the cloud VMs getting terrible performance under real-world load, and let the ones getting good or average performance thrive (a toy sketch below).
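Here's roughly what I mean, as a toy sketch. The flag pool is a hypothetical example, and the measure() callback is an assumption standing in for real-world fleet telemetry (say, requests per second reported from production):

import random

FLAG_POOL = ["-O2", "-O3", "-flto", "-march=native", "-funroll-loops"]

def mutate(flags):
    # Randomly add or drop one flag, introducing new variants over time.
    flags = set(flags)
    flags.symmetric_difference_update({random.choice(FLAG_POOL)})
    return sorted(flags)

def evolve(population, measure, generations=10, survival_rate=0.5):
    # measure(flags) -> real-world performance score for the VMs running
    # that build (hypothetical: e.g. production throughput telemetry).
    for _ in range(generations):
        scored = sorted(population, key=measure, reverse=True)
        survivors = scored[: max(1, int(len(scored) * survival_rate))]
        # Kill off the worst-performing builds; refill the fleet with
        # mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(scored) - len(survivors))
        ]
    return population

The key design point is that fitness comes from real-world load on real VMs, not from a build-time benchmark, which sidesteps the measurement problem above.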
I'm sure this is being done, and if I listened to more of Dino Dai Zovi's talks I'd probably know where and how. But aside from the performance implications, it also has security implications: it will tend to reward offensive implants for becoming less parasitic and more symbiotic.
-dave