Abstract
This study aims to uncover the underlying psychological mechanism through which individuals attribute ethical responsibility to conversational artificial intelligence (AI), and examines the implications of AI's unethical behavior for consumer evaluation. In Study 1, participants in the high (vs. low) anthropomorphism condition attributed greater ethical responsibility to the AI for its unethical behavior, while attributing less responsibility to the AI developer. In addition, the effect of anthropomorphism on ethical responsibility attribution was mediated by perceived free will. In Study 2, a significant interaction effect between perceived free will and communication strategy was found: when perceived AI free will was high, an accommodative (vs. defensive) communication strategy was more effective in reducing negative perceptions of the AI's unethical behavior, whereas the defensive strategy was more effective when perceived free will was low. By revealing the psychological mechanism through which individuals hold conversational AI ethically responsible, this study broadens the theoretical understanding of human–AI interaction and offers practical implications for AI crisis communication strategies.
| Original language | English |
|---|---|
| Pages (from-to) | 847-873 |
| Number of pages | 27 |
| Journal | International Journal of Advertising |
| Volume | 43 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2023 Advertising Association.
Keywords
- Artificial intelligence (AI)
- anthropomorphism
- ethics
- free will
- human–AI interaction
- situational crisis communication theory (SCCT)
ASJC Scopus subject areas
- Communication
- Marketing