Reputation-based Interaction Promotes Cooperation with Reinforcement Learning

Published in IEEE Transactions on Evolutionary Computation (TEVC), 2023

In this paper, we introduce a methodology for studying multi-agent Reinforcement Learning (MARL) systems and show how cooperative behavior emerges, with potential applications in both human society and artificial intelligence. This work makes a twofold contribution:

  • Novel Approach: It uses RL to model interaction intensity in decentralized multi-agent systems, and shows that combining RL with Evolutionary Game Theory (EGT) is highly effective in promoting cooperation.

  • Insights into Cooperation: The study explains why assortative mixing patterns occur in self-organizing populations with reputation-based interactions. This sheds light on the mechanisms driving cooperative behavior in multi-agent systems, with implications for both human societies and artificial intelligence.
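To make the setting concrete, here is a toy sketch (not the paper's algorithm, and it does not reproduce the paper's results): stateless Q-learning agents play a repeated prisoner's dilemma, each agent carries a reputation score that rises when it cooperates, and partners are matched with probability proportional to reputation, so better-reputed agents interact more often. All payoff values and learning parameters below are illustrative assumptions.

```python
import random

# Standard prisoner's dilemma payoffs (assumed values): T > R > P > S.
R, S, T, P = 3.0, 0.0, 5.0, 1.0
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (assumed)

class Agent:
    def __init__(self):
        self.q = [0.0, 0.0]  # Q-values for actions [cooperate, defect]
        self.reputation = 1.0

    def act(self):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            return random.randrange(2)
        return 0 if self.q[0] >= self.q[1] else 1

    def learn(self, action, reward):
        # one-step (stateless) Q-learning update
        self.q[action] += ALPHA * (reward - self.q[action])
        # reputation: exponential moving average of cooperativeness
        self.reputation = 0.9 * self.reputation + 0.1 * (1 - action)

def play(a, b):
    x, y = a.act(), b.act()
    payoff = {(0, 0): (R, R), (0, 1): (S, T), (1, 0): (T, S), (1, 1): (P, P)}
    ra, rb = payoff[(x, y)]
    a.learn(x, ra)
    b.learn(y, rb)

def step(pop):
    # reputation-weighted matching: cooperative agents are picked more often,
    # which is one simple way assortative mixing can arise
    weights = [ag.reputation for ag in pop]
    for _ in range(len(pop) // 2):
        a, b = random.choices(pop, weights=weights, k=2)
        if a is not b:
            play(a, b)

random.seed(0)
pop = [Agent() for _ in range(50)]
for _ in range(2000):
    step(pop)
coop_rate = sum(ag.q[0] >= ag.q[1] for ag in pop) / len(pop)
print(f"fraction of agents preferring cooperation: {coop_rate:.2f}")
```

The point of the sketch is only to show the three ingredients interacting: individual learning (Q-updates), reputation dynamics (the moving average), and reputation-dependent interaction (the weighted matching); see the paper for the actual model and analysis.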

Download paper from IEEE or download the Accepted Author Manuscript.

Code for reproducing the experimental results presented in the paper can be found in this repository.

I’m excited to share that this is the first publication from my PhD research, and I look forward to any feedback!

Recommended citation: T. Ren and X.J. Zeng, “Reputation-based Interaction Promotes Cooperation with Reinforcement Learning,” IEEE Transactions on Evolutionary Computation, vol. 28, no. 4, pp. 1177-1188, 2024.
