Towards a moral robot: Reward, effort, and risk distribution to humans following social norms

Preprint / Working Paper, 2024 (PEPR O2R)

Abstract

In this work, we explore the complex social norms underlying human decision-making in social scenarios and propose a machine learning model that replicates and helps understand these decisions. Focusing on the distribution of rewards, efforts, and risks between individuals, we conducted experiments with 188 human participants in an online decision-making game. We then developed an XGBoost-based model to accurately predict their decisions. To assess the model's alignment with social norms, we conducted a Turing test, which showed that our model was perceived as making morally acceptable decisions, similar to those of human participants. Furthermore, we embodied the model in a robot negotiator to observe how participants perceived and accepted decisions made by a robotic agent that automatically distributed token rewards, effort, and risk among participant dyads by perceiving their physical characteristics. Our findings contribute towards the development of a moral robot capable of decision making that respects social norms.
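The abstract mentions an XGBoost-based model trained to predict participants' distribution decisions. As an illustration only, the minimal sketch below shows what fitting such a classifier could look like; the feature names, label encoding, and synthetic data are assumptions for demonstration and are not taken from the paper.

# Minimal sketch (not the authors' code): fitting an XGBoost classifier to
# predict an allocation decision from hypothetical scenario features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical features describing a dyadic scenario (placeholders only),
# e.g. relative need, relative contribution, and task risk level.
X = rng.random((188, 3))
# Hypothetical labels: 0 = split equally, 1 = favour person A, 2 = favour person B.
y = rng.integers(0, 3, size=188)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Gradient-boosted tree classifier; hyperparameters here are illustrative.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

In practice, the features would come from the online decision-making game and the labels from the participants' recorded distribution choices; the synthetic arrays above only stand in for that data.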
Main file: IROS2024 (2).pdf (17.91 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04695550, version 1 (12-09-2024)

Identifiers

  • HAL Id: hal-04695550, version 1

Cite

Sandra Victor, Bruno Yun, Chefou MamadouToura, Enzo Indino, Pierre Bisquert, et al. Towards a moral robot: Reward, effort, and risk distribution to humans following social norms. 2024. ⟨hal-04695550⟩