Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs
Large Language Models (LLMs) excel at many tasks but struggle with moral reasoning and ethical decision-making, particularly when multiple stakeholders are involved. We introduce SKIG (Skin-in-the-Game), a framework that simulates accountability alongside empathy and risk assessment to improve decision-making. Across moral reasoning benchmarks with both proprietary and open-source LLMs, SKIG yields stronger, better-aligned decisions. Ablation studies highlight the importance of its core components for multi-stakeholder alignment.