So my first thought is that performance measurement tools seem aimed squarely at a lot of security problems, but performance people are extremely reluctant to admit that because of the drama involved in the security market. Which is very smart of them! :)

Secondly, I wanted to re-link to Halvar's QCon keynote. He has a section on the difficulties of getting good performance benchmarks, which you would typically do as part of your build chain. In theory, you have a lot of compilation flags you can twiddle: you change those values, compile your program, and get a number for how fast it is. But this turns out to be basically impossible in the real world, for reasons I'll let him explain in his presentation.
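To make the naive version of that concrete, here's a minimal sketch in Python. The compiler command, the bench.c target, the flag sets, and the run count are all made-up placeholders, and Halvar's point is exactly that the numbers a loop like this produces are far less trustworthy than they look:

```python
import statistics
import subprocess
import time

# Hypothetical flag sets to compare; placeholders, not recommendations.
FLAG_SETS = [
    ["-O2"],
    ["-O3"],
    ["-O3", "-march=native"],
    ["-O2", "-flto"],
]

def benchmark(flags, runs=10):
    """Compile a hypothetical bench.c with the given flags and time it."""
    subprocess.run(["cc", *flags, "-o", "bench", "bench.c"], check=True)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        samples.append(time.perf_counter() - start)
    # The median damps some noise, but frequency scaling, caches, and
    # noisy neighbors still skew it, which is the whole problem.
    return statistics.median(samples)

for flags in FLAG_SETS:
    print(flags, f"{benchmark(flags):.4f}s")
```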

A lot of these performance problems seem solvable only by a continuous evolutionary process: you keep a population of builds with different compilation variables, introduce new variants over time, kill off the cloud VMs that get terrible performance under real-world load, and let the ones getting good or average performance thrive.
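A minimal sketch of that loop, assuming a made-up set of build knobs and a simulated fitness function standing in for real telemetry from production VMs:

```python
import random

# Hypothetical search space: each individual is one build configuration.
KNOBS = {
    "opt_level": ["-O1", "-O2", "-O3", "-Os"],
    "lto": ["", "-flto"],
    "arch": ["", "-march=native"],
}

def random_individual():
    return {knob: random.choice(values) for knob, values in KNOBS.items()}

def mutate(individual):
    # Introduce a new variant by re-rolling one knob.
    child = dict(individual)
    knob = random.choice(list(KNOBS))
    child[knob] = random.choice(KNOBS[knob])
    return child

def simulated_fitness(individual):
    # Stand-in for real telemetry: in the scheme above this would be
    # throughput or latency reported by the VM running this build under
    # actual production load, not a synthetic benchmark.
    score = {"-O1": 1.0, "-O2": 2.0, "-O3": 3.0, "-Os": 1.5}[individual["opt_level"]]
    score += 0.5 if individual["lto"] else 0.0
    score += 0.5 if individual["arch"] else 0.0
    return score + random.gauss(0, 0.3)  # real-world measurement noise

def next_generation(population, fitness, survivors=8):
    ranked = sorted(population, key=fitness, reverse=True)
    kept = ranked[:survivors]  # kill off the VMs doing terribly
    children = [mutate(random.choice(kept))
                for _ in range(len(population) - survivors)]
    return kept + children

population = [random_individual() for _ in range(24)]
for _ in range(20):
    population = next_generation(population, simulated_fitness)
print(max(population, key=simulated_fitness))
```

The load-bearing design choice is that fitness comes from live production measurement rather than a build-time benchmark, which is what makes the culling meaningful, and also what would give an implant something to optimize for.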

I'm sure this is being done, and if I listened to more of Dino dai Zovi's talks I'd probably know where and how. But aside from the performance implications, it also has security implications: a scheme like this tends to reward offensive implants for becoming less parasitic and more symbiotic, since an implant that doesn't drag down performance survives the culling.

-dave