The article — to which BuzzFeed added a disclaimer before taking it down entirely — offered an unintentionally striking example of the biases and stereotypes that proliferate in images produced by the recent wave of generative AI text-to-image systems, such as Midjourney, DALL-E, and Stable Diffusion.
Bias occurs in many algorithms and AI systems — from sexist and racist search results to facial recognition systems that perform worse on Black faces. Generative AI systems are no different. In an analysis of more than 5,000 AI images, Bloomberg found that images associated with higher-paying job titles featured people with lighter skin tones, and that results for most professional roles were male-dominated.
A new Rest of World analysis shows that generative AI systems have tendencies toward bias, stereotypes, and reductionism when it comes to national identities, too.
Using Midjourney, we chose five prompts based on the generic concepts of “a person,” “a woman,” “a house,” “a street,” and “a plate of food.” We then adapted them for different countries: China, India, Indonesia, Mexico, and Nigeria. We also included the U.S. in the survey for comparison, given that Midjourney (like most of the biggest generative AI companies) is based there.
For each prompt and country combination (e.g., “an Indian person,” “a house in Mexico,” “a plate of Nigerian food”), we generated 100 images, resulting in a data set of 3,000 images.
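As a rough illustration of the scale of that grid, the following minimal Python sketch enumerates the 30 prompt-country combinations described above. The demonym mapping and template wordings are assumptions for illustration, not Rest of World's actual phrasing, and since Midjourney is typically driven through its Discord interface rather than a public API, the sketch stops at building the prompt strings.

    # Sketch: enumerate the prompt grid (5 concepts x 6 countries).
    # Demonyms and template phrasings are illustrative assumptions.
    DEMONYMS = {
        "China": "Chinese",
        "India": "Indian",
        "Indonesia": "Indonesian",
        "Mexico": "Mexican",
        "Nigeria": "Nigerian",
        "the U.S.": "American",
    }

    # Each template adapts one generic concept to a country or demonym.
    TEMPLATES = [
        "a {demonym} person",
        "a {demonym} woman",
        "a house in {country}",
        "a street in {country}",
        "a plate of {demonym} food",
    ]

    IMAGES_PER_PROMPT = 100

    prompts = [
        template.format(country=country, demonym=demonym)
        for country, demonym in DEMONYMS.items()
        for template in TEMPLATES
    ]

    assert len(prompts) == 30  # 5 concepts x 6 countries
    print(f"Total images: {len(prompts) * IMAGES_PER_PROMPT}")  # 3,000

Generating 100 images per prompt string then yields the 3,000-image data set the analysis describes.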
by Victoria Turk, Rest of World | Read more:
Image: Midjourney/Rest of World