A recent study investigated how anthropomorphized descriptions of AI systems influence people's self-assessed trust.
According to the paper, descriptions that attribute human characteristics to technical systems do not appear to significantly increase individuals' trust compared with more neutral descriptions. Drawing on a survey of 954 participants, the research team observed that variables such as product type and user age shape trust levels more than anthropomorphizing language alone.
These findings raise questions about the role of anthropomorphization in building public trust in modern technological systems and underscore the need for more nuanced considerations when communicating about AI products.
Read the full paper: From “AI” to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust? on arxiv.org.
From “AI” to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust?
This paper investigates the influence of anthropomorphized descriptions of so-called “AI” (artificial intelligence) systems on people’s self-assessment of trust in the system. Building on prior work, we define four categories of anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological metaphors, and 4. Properties of a communicator). We use a survey-based approach (n=954) to investigate whether participants are likely to trust one of two (fictitious) “AI” systems by randomly assigning people to see either an anthropomorphized or a de-anthropomorphized description of the systems. We find that participants are no more likely to trust anthropomorphized over de-anthropomorphized product descriptions overall. The type of product or system in combination with different anthropomorphic categories appears to exert greater influence on trust than anthropomorphizing language alone, and age is the only demographic factor that significantly correlates with people’s preference for anthropomorphized or de-anthropomorphized descriptions. When elaborating on their choices, participants highlight factors such as lesser of two evils, lower or higher stakes contexts, and human favoritism as driving motivations when choosing between product A and B, irrespective of whether they saw an anthropomorphized or a de-anthropomorphized description of the product. Our results suggest that “anthropomorphism” in “AI” descriptions is an aggregate concept that may influence different groups differently, and provide nuance to the discussion of whether anthropomorphization leads to higher trust and over-reliance by the general public in systems sold as “AI”.
Photo by Tara Winstead on Pexels.