As AI systems become embedded in society, a new question is emerging: can they improve our lives beyond technical and creative tasks? Can they help humanity make better decisions, make us less selfish, and foster cooperation?
A recent study by researchers Arend Hintze and Christoph Adami explores exactly this question in their paper, “Promoting Cooperation in the Public Goods Game Using Artificial Intelligence Agents,” published in npj Complexity.
The tragedy of the commons
The tragedy of the commons describes how individuals sharing a limited resource tend to overuse and deplete it, leaving the entire group worse off. TED-Ed has a good video explaining this concept, which I recommend you watch. To test whether AI can improve cooperation between humans, the researchers used a well-known cooperation experiment known as the "public goods" game.
In this experiment, players can either contribute to a shared pool that benefits everyone or keep their money for themselves. The group does best when everyone contributes, but each individual is tempted to hold back while still enjoying the shared reward. Playing among themselves, humans didn't fare well: they acted out of self-interest rather than as part of the group. The researchers then introduced AI agents into the mix.
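The incentive structure of a public goods game is easy to see in code. The sketch below uses a hypothetical endowment, multiplier, and group size chosen for illustration, not the parameters from the study:

```python
# Illustrative public goods game round. The endowment (10) and
# multiplier (1.6) are hypothetical values, not taken from the paper.

def public_goods_payoffs(contributions, endowment=10, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation: everyone ends up better off than their endowment.
print(public_goods_payoffs([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0]

# One defector free-rides on the others' contributions and earns the most.
print(public_goods_payoffs([10, 10, 10, 0]))   # [12.0, 12.0, 12.0, 22.0]
```

The second call shows the dilemma: the defector out-earns everyone, so each individual is tempted to defect even though the group as a whole does best under full cooperation.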
In the first scenario, the AI agents were programmed to always cooperate. That sounds promising, but it didn’t change human behavior. People continued acting in their own interest. Simply adding “good” actors to the system wasn’t enough. In the second scenario, players could control the AI agents. As you can imagine, this backfired. Players set the AI to cooperate while choosing not to cooperate themselves, outsourcing good behavior while maximizing personal gain.
The third scenario showed promising results. The AI agents mimicked the behavior of the players they interacted with. If a person cooperated, the AI cooperated. If the person acted selfishly, the AI mirrored that choice. This created a feedback loop in which human cooperation was rewarded with AI cooperation, and cooperation between the human players improved as a result.
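The mirroring rule resembles a tit-for-tat strategy. Here is a minimal sketch of that idea; the class name and round structure are simplified assumptions for illustration, not the paper's exact setup:

```python
# Sketch of the third scenario: the AI agent repeats the human's
# previous move (a tit-for-tat-style rule), starting cooperatively.

class MirroringAgent:
    def __init__(self):
        self.next_move = "cooperate"  # open with cooperation

    def play(self):
        return self.next_move

    def observe(self, human_move):
        # Copy the human's choice for the next round.
        self.next_move = human_move

agent = MirroringAgent()
history = []
for human_move in ["defect", "cooperate", "cooperate"]:
    history.append((human_move, agent.play()))
    agent.observe(human_move)

print(history)
# [('defect', 'cooperate'), ('cooperate', 'defect'), ('cooperate', 'cooperate')]
```

Because the agent echoes each choice back, a selfish human faces selfish agents in the next round, while a cooperative human is rewarded with cooperation, which is the feedback loop the study describes.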
What does all this have to do with self-driving cars?

While the study is simplified and its real-world applicability is limited, the researchers say its findings could apply to several scenarios, including self-driving cars. Autonomous cars, for example, could be designed to reward cooperative driving rather than simply follow rigid rules. If enough self-driving cars adopted this behavior, they could create a positive feedback loop that benefits everyone.
AI cannot magically eliminate selfishness. However, it can provide enough incentives to make cooperation the smarter choice, especially for EVs. Findings published in Transportation Research also propose an integrated system for routing and coordinating the movement of idle vehicles to best serve passengers. Another study, published in Robotics, proposes a collision-free tracking and visual connectivity system for self-driving vehicles.
This principle could also be used to schedule charging for self-driving electric cars so that they avoid long waits and stress on the power grid, as detailed in this paper. AI systems, including chatbots such as ChatGPT and Gemini, already learn and improve through reward-based training, and the same approach could well help solve real-world robotaxi problems as these vehicles slowly enter the mainstream.
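To make the charging idea concrete, here is a minimal sketch of cooperative scheduling: each car greedily picks the least-loaded time slot, spreading demand instead of everyone charging at once. The slot names, fleet, and greedy rule are my own illustrative assumptions, not the method from the cited paper:

```python
# Toy cooperative charge scheduler: each car takes the time slot with
# the lowest current load, balancing demand across the grid.

def schedule_charging(cars, slots):
    load = {slot: 0 for slot in slots}
    assignment = {}
    for car in cars:
        # Greedily choose the least-loaded slot (ties broken by slot order).
        best = min(slots, key=lambda s: load[s])
        assignment[car] = best
        load[best] += 1
    return assignment, load

cars = ["car_a", "car_b", "car_c", "car_d"]
slots = ["22:00", "23:00", "00:00"]
assignment, load = schedule_charging(cars, slots)
print(load)  # {'22:00': 2, '23:00': 1, '00:00': 1}
```

Even this naive rule keeps any single slot from being overloaded, which is the kind of grid-friendly coordination the research points toward.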
