Human vs. AI Influence: How People Respond to Information in Objective Tasks
HIGHLIGHTS:
- Preference for AI in Objective Tasks: People tend to conform to information provided by AI agents in objective tasks, such as counting.
- Human Influence in Subjective Tasks: In subjective tasks related to attributing meaning to images, humans exert a stronger influence on individuals.
- Perceived Informativeness: Participants perceived both human and AI sources as equally informative, suggesting recognition of the value of both sources of information.
- Preference for AI Accuracy: Participants tended to adjust their estimates towards the numbers proposed by AI, regardless of whether the AI overestimated or underestimated the actual count.
- Complex Interplay: The study highlights the complex interplay between humans and AI agents in shaping social behavior and decision-making processes.
People trust information differently depending on whether it comes from humans or artificial intelligence (AI) agents, according to new research published in Acta Psychologica.
The study found that individuals were more likely to conform to AI-provided information in objective tasks, such as counting, while they were more influenced by humans in subjective tasks related to attributing meaning to images.
The study aimed to compare the impact of human and AI information on social influence.
Traditionally, social influence has been associated with humans influencing each other’s thoughts, emotions, and behaviors.
Why People Trust AI More Than Humans in Objective Tasks
However, the rise of AI and non-human agents such as chatbots, virtual assistants, and robots has expanded the sources of potential social influence beyond humans alone.
Lead author Paolo Riva and his team wanted to examine how much people would be influenced by information provided by humans versus AI agents, depending on the task at hand.
They predicted that AI would have more influence in objective tasks, such as counting, while humans would be more influential in subjective tasks involving attributing meaning.
The researchers conducted two experiments, one objective and one subjective.
The first study involved participants estimating the number of dots on images. They were shown a set of images with dots and asked to provide their estimates.
Later, they were presented with two estimations, one from an AI and the other from a human. The AI and human estimates systematically varied in either overestimating or underestimating the number of dots.
Participants’ own estimates were influenced more strongly by the AI’s estimation, showing greater conformity to the AI’s information.
In the second study, participants were presented with images from the card game Dixit. They were asked to rate the association between the images and two concepts, one proposed by an AI and the other by a human.
Results showed that the human’s concept had a greater influence on the participants.
The Findings
However, participants found both the human and the AI to be equally informative sources.
Overall, the findings demonstrated that individuals can conform more to AI agents than to humans in a digital context, particularly in objective tasks involving uncertainty.
However, for subjective tasks, humans remained the more credible source of influence compared to AI agents.
In addition to the aforementioned findings, the study revealed several other interesting insights. One notable finding was that participants showed a greater tendency to adjust their estimates towards the number proposed by the AI, regardless of whether the AI overestimated or underestimated the actual count.
This suggests that individuals perceived the AI’s estimations as more accurate and trustworthy.
Moreover, when participants were presented with images from the card game Dixit and asked to rate the concepts associated with those images, the influence of the human source was found to be stronger.
Participants were more likely to conform to the concept proposed by the human. This indicates that in subjective tasks involving attributing meaning, humans remained the more influential source compared to AI agents.
The Dynamics of Human-AI Interaction: Factors Influencing Social Influence
However, when explicitly asked about which source they found more informative, participants’ responses were evenly divided between the human and the AI.
This indicates that participants recognized the value of both sources of information and did not show a clear preference for one over the other in terms of perceived informativeness.
These findings provide valuable insights into the dynamics of social influence in the context of human-AI interactions.
They suggest that while AI agents can exert a significant influence on individuals’ decision-making, the extent of their influence may depend on the nature of the task and the perception of accuracy.
The study highlights the complex interplay between humans and AI agents in shaping social behavior and decision-making processes.
However, it is important to note that the study did not explore the attribution of mental states to the agents of influence, and it remains unclear whether the influence persists when the source of influence is no longer present.
Further research is needed to delve deeper into these aspects and gain a more comprehensive understanding of the implications of human-AI interaction on social behavior.
Overdependence on AI and Cognitive Function
The potential for overdependence on AI to cause cognitive decline is a complex and multifaceted issue. While AI can enhance and augment human capabilities in various domains, excessive reliance on AI for cognitive tasks may have unintended consequences.
One concern is the potential for reduced cognitive engagement and mental stimulation when relying heavily on AI for decision-making and problem-solving.
When individuals rely solely on AI algorithms to provide solutions, they may become less inclined to actively think critically, analyze information, and exercise their cognitive abilities.
This lack of mental stimulation and reduced cognitive effort could potentially lead to a decline in cognitive skills over time.
Additionally, overdependence on AI may contribute to the erosion of certain cognitive skills and knowledge domains.
If individuals rely exclusively on AI systems to perform tasks that were once within their expertise, they may gradually lose proficiency in those areas.
For example, if professionals heavily rely on AI tools for data analysis or language translation, they may experience a decline in their own analytical or linguistic abilities.
Another aspect to consider is the potential for information overload and decreased information processing capabilities.
AI systems can provide vast amounts of information and recommendations, but individuals may struggle to effectively process and evaluate the overwhelming volume of data.
This can lead to cognitive overload, decision fatigue, and reduced ability to make independent judgments.
Overdependence on AI and Cognitive Decline: Balancing AI Assistance and Cognitive Engagement
Furthermore, there is the risk of complacency and reduced vigilance when relying on AI systems for error detection or decision-making.
If individuals develop blind trust in AI without critically assessing its outputs, they may become less attentive to potential errors or biases in the system’s functioning.
This can have detrimental effects on decision quality and problem-solving skills.
However, it is important to note that the impact of AI on cognitive decline is still a topic of ongoing research, and the extent to which overdependence on AI leads to cognitive decline may vary across individuals and contexts.
It is crucial for individuals to strike a balance between leveraging AI’s benefits and maintaining active cognitive engagement to ensure the preservation and development of their cognitive abilities.