Published : 2025-12-29

Hidden Algorithms of Culture: A Review and Critical Analysis of Cultural Bias in General-Purpose Generative AI Chatbots

Abstract

The aim of this article is to review and systematise the results of recent empirical studies on manifestations of cultural bias in content produced by general-purpose generative AI chatbots such as ChatGPT, Copilot, Gemini, Claude and DeepSeek, and to identify their potential social consequences. The following research questions were formulated: What are the types and scale of cultural biases in generative AI chatbots? What are the social consequences of their occurrence, and what are possible ways and directions of counteracting them? The review is based on a critical analysis of 17 recent empirical studies published in 2024–2025. The analysis reveals the complex nature of the presence and consequences of cultural bias in current AI models, clearly demonstrating that these models reflect and reinforce Western cultural patterns. Four types of cultural bias were identified: axiological-civilisational, racial-ethnic, national, and religious-ideological. The analysis also showed that cultural bias is not merely a technical problem of algorithms, but a deeply rooted social phenomenon resulting from the contexts of the training data and the design decisions made by technology developers.

Keywords:

cultural bias, stereotypes, artificial intelligence, AI chatbots, LLM


Wysocki, A. (2025). Hidden Algorithms of Culture: A Review and Critical Analysis of Cultural Bias in General-Purpose Generative AI Chatbots. Roczniki Nauk Społecznych, 53(4), 7–21. https://doi.org/10.18290/rns2025.0043


Articles in the journal are available under the Creative Commons Attribution – NonCommercial – NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0).