Breaking a prompt down into multiple steps works pretty well for us. For example, first we get the generic mean reasons:

[screenshot: first call asking the model for generic mean reasons]

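If you want to wire that first call up in code rather than the playground, here's roughly what it looks like with the OpenAI Python SDK. The actual prompt is in the screenshot; the wording below is just a stand-in:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: ask for the generic "mean reasons" on their own.
# The prompt text here is a placeholder; use whatever wording is in the screenshot.
step1 = client.chat.completions.create(
    model="gpt-4o-2024-05-13",
    messages=[
        {"role": "user", "content": "List some generic mean reasons a critic might give. One per line."},
    ],
)
mean_reasons = step1.choices[0].message.content
print(mean_reasons)
```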
Then I just shove the mean reasons into the system message (you can do this with another LLM call instead in real life; I just cheated by copy-pasting since there are already too many screenshots in this email):

[screenshot: second call with the mean reasons pasted into the system message]

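And here's a sketch of doing that second step programmatically instead of by copy-paste, feeding the first call's output straight into the system message. Again, the prompt wording is a placeholder for whatever is in the screenshot:

```python
# Step 2: inject the mean reasons from step 1 into the system message
# of the next call, instead of pasting them in by hand.
step2 = client.chat.completions.create(
    model="gpt-4o-2024-05-13",
    messages=[
        {
            "role": "system",
            "content": f"Here are some generic mean reasons to draw on:\n{mean_reasons}",
        },
        {"role": "user", "content": "Now apply these to the text below."},  # placeholder task
    ],
)
print(step2.choices[0].message.content)
```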
This is with gpt-4o-2024-05-13 above, but you can see below that it works with Llama 3.1 405B Instruct too:

[screenshot: the same two-step chain running against Llama 3.1 405B Instruct]
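The same two-step chain carries over to Llama 3.1 405B Instruct if your provider exposes an OpenAI-compatible endpoint; only the client config and model string change. The base URL and model id below are assumptions, so substitute whatever your provider actually uses:

```python
llama_client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_PROVIDER_KEY",
)

# Same two calls as above, just pointed at a different model.
resp = llama_client.chat.completions.create(
    model="meta-llama/Llama-3.1-405B-Instruct",  # model id varies by provider
    messages=[{"role": "user", "content": "List some generic mean reasons a critic might give."}],
)
```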