This article is republished from The Conversation under a Creative Commons license.
The rapid spread of artificial intelligence raises a question: who is most likely to embrace AI in their daily lives? Many assume it is the tech-savvy, those who understand how the technology works, who are most eager to adopt it.
Surprisingly, our new research, published in the Journal of Marketing, finds the opposite. People with less knowledge of AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
This link appears across different groups, settings and even countries. For example, our analysis of data from the market research firm Ipsos, covering 27 countries, reveals that people in nations with lower average AI literacy are more receptive to AI adoption than those in nations with higher literacy.
Similarly, our survey of US undergraduate students finds that those with less understanding of AI are more likely to use it for tasks such as academic assignments.
The reason behind this link lies in how AI now performs tasks we once thought only humans could do. When AI creates a piece of art, writes a heartfelt response or plays a musical instrument, it can feel almost magical, as if it were crossing into human territory.
Of course, AI does not actually possess human qualities. ChatGPT can generate an empathetic response, but it does not feel empathy. People with more technical knowledge of AI understand this.
They know how algorithms (sets of mathematical rules used by computers to carry out particular tasks), training data (used to improve how an AI system works) and computational models work. This makes the technology seem less mysterious.
People with less understanding of AI, on the other hand, may see it as magical and awe-inspiring. We suggest that this sense of magic makes them more open to using AI tools.
Our studies show that this lower literacy-higher receptivity link is strongest for using AI tools in domains people associate with human traits, such as providing emotional support or counseling. When it comes to tasks that don’t evoke the same sense of human qualities, such as analyzing test results, the pattern reverses. People with higher AI literacy are more receptive to these uses because they focus on AI’s performance rather than any “magical” qualities.
It’s not about capability, fear or ethics
Interestingly, this link between lower literacy and higher receptivity persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical and even a bit frightening. Their openness to AI seems to stem from their sense of wonder at what it can do, despite these perceived shortcomings.
This finding offers new insight into why people respond so differently to emerging technologies. Some studies suggest consumers embrace new technology, a phenomenon called “algorithm appreciation”, while others show skepticism, or “algorithm aversion”. Our research points to perceptions of AI’s “magic” as a key factor shaping these reactions.
These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy may unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a delicate balance between helping people understand AI and keeping them open to its adoption.