A research team explored the conditions under which people will accept the moral judgments of AI. They focused on "justified non-cooperation," the withholding of help from people with bad reputations, a behavior that people find difficult to judge as good or bad, and investigated under what conditions people are more likely to accept an AI's judgment over a human's.
The study revealed that people tend to accept an AI's judgment more readily when the AI judges the behavior positively and a human judges it negatively. The results were published in the journal Scientific Reports on 27 January 2025.
As AI technology becomes more integrated into daily life, understanding public acceptance of AI’s decisions is critical. Previous studies have shown that people often hold biases, such as “algorithm aversion” and “algorithm appreciation,” where they might distrust or over-trust AI.
However, this study addresses a less-explored scenario in which individuals are uncertain in their moral judgment, specifically in indirect reciprocity, where people decide whether to cooperate with others based on reputation.
The researchers conducted two experiments with Japanese participants, examining how they judged an AI manager's decision compared with a human manager's in a workplace scenario. The key finding was that participants were more inclined to accept the AI's decision when the AI judged a non-cooperative action as positive (justified defection) and a human judged it as negative.
The results suggest that individuals may perceive AI’s judgment as more objective, especially when human judgments might be perceived as biased or driven by hidden intentions.
The findings contribute to a deeper understanding of the mechanisms behind people’s acceptance of AI in moral and social decision-making, highlighting the importance of context in shaping these perceptions. As society continues integrating AI into complex decision-making roles, such insights are essential for designing AI systems that align with human expectations and societal norms.
The research was led by Prof. Hitoshi Yamamoto of Rissho University and Prof. Takahisa Suzuki of Tsuda University.
More information:
Hitoshi Yamamoto et al, Exploring condition in which people accept AI over human judgements on justified defection, Scientific Reports (2025). DOI: 10.1038/s41598-025-87170-w
Provided by
Rissho University
Citation:
Accepting AI judgments on moral decisions: A study on justified defection (2025, January 30), retrieved 30 January 2025 from https://phys.org/news/2025-01-ai-judgments-moral-decisions-defection.html