Clear Sky Science
Mechanism of public behavioral intention to use generative AI for folk story image co-creation
Why stories and smart tools now go hand in hand
Folk stories are some of humanity’s oldest treasures, but in a world ruled by short videos and scrolling feeds, they struggle to compete. This study asks a timely question: can ordinary people use generative artificial intelligence—image-making tools like those behind today’s AI art—to help keep traditional tales alive? And just as importantly, what makes someone decide whether they actually want to use these tools to co-create images for folk stories?
Old tales in a new media world
Across countries, folk stories are officially celebrated as part of “intangible cultural heritage,” yet they often receive little real protection or public attention. Most are still passed on through spoken storytelling or printed text, formats that can feel distant in a visually saturated digital environment. Museums and archives preserve materials but rarely invite everyday people to participate. Generative AI changes this landscape by letting non‑experts turn simple prompts into rich images, lowering the technical barrier to visual storytelling. The authors argue that this shift could turn passive audiences into active collaborators in reshaping and sharing traditional tales.

What shapes people’s willingness to join in
To unpack why someone would or would not use AI to co‑create folk story images, the researchers blended two well‑known behavior theories: the Technology Acceptance Model and the Theory of Planned Behavior. From these they drew classic ingredients such as how useful and easy to use people think a tool is, how positively they feel about using it, how much influence they sense from friends or society, and how much control they believe they have over the process. They then added three fresh elements tailored to this cultural setting: how people judge the quality and emotional impact of AI‑generated images, how confident they feel in their own ability to use AI creatively, and whether they carry a bias against works known to be made by AI rather than humans.
From survey answers to hidden patterns
The team collected 682 online survey responses from adults in China, most of whom were familiar with both AI tools and traditional stories. Participants saw examples of AI‑generated images and hand‑drawn pictures based on the same folk tale, then rated statements about their feelings, expectations, and intentions on a five‑point scale. The researchers first used a statistical technique called structural equation modeling to test which factors directly or indirectly pushed people toward or away from using AI for story image co‑creation. They then fed the results into several machine‑learning models that treated the hidden psychological factors as inputs and learned to predict whether a person held strong or weak intentions to use AI. This second stage let the team explore both simple and more tangled, non‑linear relationships among the factors.
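To make the second, machine‑learning stage concrete, here is a minimal sketch of the general idea: latent factor scores serve as inputs to a classifier that predicts strong versus weak intention. The factor names, effect sizes, and simulated data are illustrative assumptions, not the authors' actual variables or results, and a plain logistic regression stands in for whatever models the study used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 682  # sample size matching the study

# Hypothetical latent factor scores (construct names assumed for illustration)
quality = rng.normal(0, 1, n)    # perceived quality of AI images
efficacy = rng.normal(0, 1, n)   # creative self-efficacy
ai_bias = rng.normal(0, 1, n)    # bias against AI-made works

# Illustrative ground truth: quality and efficacy raise intention, bias lowers it
logit = 1.2 * quality + 0.8 * efficacy - 1.0 * ai_bias
strong_intention = (logit + rng.normal(0, 0.5, n) > 0).astype(float)

# Logistic regression fit by gradient descent (stand-in for the ML stage)
X = np.column_stack([np.ones(n), quality, efficacy, ai_bias])
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted probability
    w -= 0.1 * X.T @ (p - strong_intention) / n  # gradient step

pred = (1 / (1 + np.exp(-X @ w))) > 0.5
accuracy = (pred == strong_intention.astype(bool)).mean()
print("classification accuracy:", round(accuracy, 2))
```

In this toy setup the fitted weights recover the assumed pattern (positive for quality and self‑efficacy, negative for AI bias), which is the kind of interpretable signal such a classifier can surface on top of a structural model.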

Hidden drivers: quality, doubt, and confidence
The analysis revealed that two forces pull in opposite directions. When people see AI‑generated folk story images as high in technical polish, meaning, and emotional impact, their willingness to use AI rises sharply once quality passes a certain threshold. But when they hold a strong bias against the very idea of AI as a cultural creator—preferring works they believe are human‑made—their intention drops steadily, regardless of actual quality. This identity bias also dampens their social sense that “people around me approve of this,” weakening the supportive effect of group norms. At the same time, inner confidence and a feeling of control matter: people who believe they can handle the tools and steer the results are much more likely to join in, especially when the tools feel genuinely easy to use and aligned with their expectations.
What the findings mean for the future of folk tales
In plain terms, the study shows that people are willing to use generative AI to revitalize folk stories if three conditions are met: the images must feel emotionally and culturally satisfying, the tools must feel approachable and responsive, and users must feel that they—not the machine—remain the true storytellers. Poor‑quality output, clumsy interfaces, or a sense that “AI has no right to speak for our culture” can all undermine that willingness. The authors suggest that designers, educators, and cultural institutions focus on raising the artistic and cultural quality of AI images, making interfaces friendlier, building learning pathways that boost users’ confidence, and framing AI clearly as a helper rather than a replacement for human storytellers. Under those conditions, generative AI could become a powerful ally in keeping folk stories vibrant for future generations.
Citation: Kong, X., Liu, Y., Shi, Y. et al. Mechanism of public behavioral intention to use generative AI for folk story image co-creation. npj Herit. Sci. 14, 164 (2026). https://doi.org/10.1038/s40494-025-02285-7
Keywords: generative AI, folk stories, cultural heritage, public participation, technology acceptance