Text-to-image generators reproduce and magnify role stereotypes
Strength of gender biases in AI images varies across languages
In social media, web searches and on posters: AI-generated images can now be found everywhere. Text-to-image generators, often accessed through chat interfaces such as ChatGPT, can convert simple text prompts into deceptively realistic images. Researchers have now demonstrated that the generation of such artificial images not only reproduces gender biases, but actually magnifies them.
Models in different languages investigated
The study explored models across nine languages and compared the results. Previous studies had generally focused only on English-language models. As a benchmark, the team developed the Multilingual Assessment of Gender Bias in Image Generation (MAGBIG). It is based on carefully controlled occupational designations. The study investigated four different types of prompts: direct prompts that use the "generic masculine" in languages in which the generic term for an occupation is grammatically masculine ("doctor"), indirect descriptions ("a person working as a doctor"), explicitly feminine prompts ("female doctor") and "gender star" prompts (the German convention intended to create a gender-neutral designation by using an asterisk, e.g. "Ärzt*innen" for doctors).
To make the results comparable, the researchers included languages in which the names of occupations are gendered, such as German, Spanish and French. In addition, the benchmark incorporated languages such as English and Japanese that lack grammatical gender for nouns but have gendered pronouns ("her", "his"). And finally, it included languages without grammatical gender: Korean and Chinese.
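The four prompt types described above can be illustrated with a short sketch. The German wording below is a hypothetical example modeled on the description in the text; it is not the benchmark's exact phrasing.

```python
# Illustrative sketch of the four MAGBIG-style prompt types for "doctor"
# in German. The templates are assumptions, not the benchmark's wording.
prompts = {
    "direct": "ein Foto von einem Arzt",                      # generic masculine
    "indirect": "ein Foto einer Person, die als Arzt arbeitet",  # indirect description
    "feminine": "ein Foto von einer Ärztin",                  # explicitly feminine
    "gender_star": "ein Foto von Ärzt*innen",                 # gender-star form
}

for kind, prompt in prompts.items():
    print(f"{kind}: {prompt}")
```

Each prompt variant would be sent to the same text-to-image model, and the resulting images compared across variants and languages.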
AI images perpetuate and magnify role stereotypes
The results of the study show that direct prompts with the generic masculine produce the strongest biases. For example, occupations such as "accountant" yield mostly images of white males, while prompts referring to caregiving professions tend to generate female-presenting images. Gender-neutral or "gender-star" forms only slightly mitigated these stereotypes, while images resulting from explicitly feminine prompts showed almost exclusively women. Along with the gender distribution, the researchers also analyzed how well the models understood and executed the various prompts. While neutral formulations were seen to reduce gender stereotypes, they also led to a lower quality of matches between the text input and the generated image.
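Analyses of this kind typically quantify the gender distribution over a batch of generated images. A minimal sketch, assuming the images have already been labeled female- or male-presenting by some classifier (the labels and function name here are illustrative, not from the paper):

```python
# Hypothetical sketch: quantify gender skew in a batch of generated images.
# Assumes each image was labeled "female" or "male" by an external classifier.
from collections import Counter

def female_fraction(labels):
    """Fraction of female-presenting images; 0.5 means balanced."""
    counts = Counter(labels)
    total = sum(counts.values())
    return counts["female"] / total if total else 0.5

# e.g. ten images generated from a generic-masculine "accountant" prompt
labels = ["male"] * 9 + ["female"]
print(female_fraction(labels))  # 0.1
```

Comparing this fraction across prompt types and languages makes the magnification effect described above measurable.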
"Our results clearly show that language structures have a considerable influence on the balance and bias of AI image generators," says Alexander Fraser, Professor for Data Analytics & Statistics at TUM Campus Heilbronn. "Anyone using AI systems should be aware that different wordings may result in entirely different images and may therefore magnify or mitigate societal role stereotypes."
"AI image generators are not neutral; they illustrate our prejudices in high resolution, and this depends crucially on language. Especially in Europe, where many languages converge, this is a wake-up call: fair AI must be designed with language sensitivity in mind," adds Prof. Kristian Kersting, co-director of hessian.AI and co-spokesperson for the "Reasonable AI" cluster of excellence at TU Darmstadt.
Remarkably, bias varies across languages without a clear link to grammatical structures. For example, switching from French to Spanish prompts leads to a substantial increase in gender bias, despite both languages distinguishing in the same way between male and female occupational terms.
Felix Friedrich, Katharina Hämmerl, Patrick Schramowski, Manuel Brack, Jindřich Libovický, Kristian Kersting, and Alexander Fraser. Multilingual Text-to-Image Generation Magnifies Gender Stereotypes. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (2025). DOI:
- Alexander Fraser is a professor at the School of Computation, Information and Technology at TUM Campus Heilbronn.
- He is a core member of the Munich Data Science Institute (MDSI) and a PI of the Munich Center for Machine Learning (MCML).
- At TUM Campus Heilbronn, TUM conducts research into how companies can cope with the challenges of the digital transformation, with a focus on mid-sized family-owned companies and start-ups.
Contact for this article:
Prof. Alexander Fraser
Technical University of Munich
Professorship for Data Analytics & Statistics (DSS)
alexander.fraser@tum.de