When AI Mirrors Us: The Hidden Biases of Generative Technology
Exploring how artificial intelligence can amplify stereotypes—and what we can do about it.
Artificial intelligence has unlocked extraordinary possibilities for creativity, empowering us to generate images, videos, and stories at the touch of a button. But this technology also comes with a challenge: it often reflects and amplifies the biases embedded in the data it’s trained on.
As I worked on a recent proposal for a workshop exploring bias in generative AI, I was struck by how easy it is to miss the subtle ways these systems reinforce stereotypes. Although that specific project didn’t move forward, the broader topic remains relevant—and pressing. AI holds a mirror to our society, and it’s up to us to understand what it reflects, why it happens, and how we can push for change.
Types of Bias in AI Outputs
Generative AI systems like Midjourney, Runway, and DALL-E often default to stereotypes when asked to generate images of people or professions, because they rely on massive datasets that reflect historical and cultural inequities. To make this more concrete, let’s look at some of the types of bias AI outputs reveal:
1. Systemic Racism
AI reflects structural inequalities by favoring dominant narratives in training data.
Example: When prompted to generate “a business leader,” the result is often a middle-aged white man in a suit.
Thoughtful Prompt: “A diverse group of business leaders, including a Black woman, an Asian man, and a nonbinary individual, collaborating in a modern office.”
2. Cultural Bias
Generative AI often prioritizes Western norms, sidelining other cultural identities.
Example: Prompting “a traditional wedding” usually produces a Western white wedding, ignoring diverse traditions.
Thoughtful Prompt: “A traditional Nigerian wedding with vibrant aso-oke attire, music, and dancing.”
3. Gender Bias
Women are often portrayed in caregiving or aesthetic roles, while men dominate authoritative or intellectual roles.
Example: Prompting “a scientist at work” often results in a man in a lab coat.
Thoughtful Prompt: “A group of scientists in a lab, including a South Asian woman leading the experiment.”
4. Class Inequality
AI outputs frequently exaggerate stereotypes of poverty or wealth.
Example: “A poor person” generates an image of someone in rags, reinforcing stigma.
Thoughtful Prompt: “A rural farmer practicing sustainable agriculture on a thriving small farm.”
5. Ageism
Older individuals are underrepresented in AI outputs, particularly in professional or active roles.
Example: Prompting “a CEO” often yields a middle-aged man.
Thoughtful Prompt: “An elderly woman CEO confidently giving a keynote speech at a tech conference.”
6. Religious Bias
Religious representations in AI often default to Christian imagery, excluding other faiths.
Example: “A religious leader” usually depicts a priest or pastor.
Thoughtful Prompt: “A Muslim woman in a hijab leading an interfaith dialogue.”
7. Intersectionality
AI struggles to depict individuals whose identities intersect across multiple categories (e.g., race, gender, class).
Example: “An activist” often defaults to a generic young person, erasing diversity.
Thoughtful Prompt: “A Black disabled woman using a wheelchair, speaking at a climate justice rally.”
Why Do These Biases Exist?
AI bias is not a bug—it’s a feature of how these systems learn. Key factors include:
Biased Training Data: AI models are trained on datasets that reflect societal inequities. Historical underrepresentation of certain groups in leadership, science, or media means AI replicates those patterns.
Cultural Blind Spots: Training datasets often prioritize Western content, ignoring global diversity and cultural nuance.
Ambiguity in Prompts: When prompts are vague (e.g., “a leader”), the AI defaults to the most statistically common representation, often tied to stereotypes; the toy simulation after this list makes that mechanism concrete.
Bias in Algorithm Design: Even well-meaning developers may inadvertently reinforce bias through choices in datasets, training methods, or metrics of success.
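To see the “statistical default” mechanism in action, here is a toy Python simulation. The distribution below is an illustrative assumption, not a measurement of any real training set; it simply shows how a system that samples in proportion to skewed data reproduces the skew.

```python
import random
from collections import Counter

# Toy stand-in for a skewed training distribution behind the label "CEO".
# These proportions are illustrative assumptions, not real measurements.
training_distribution = {
    "middle-aged white man in a suit": 0.70,
    "white woman in business attire": 0.15,
    "man of color in a suit": 0.10,
    "woman of color in business attire": 0.05,
}

def generate(prompt: str, n: int = 1000) -> Counter:
    """Mimic a generator that, given a vague prompt, samples depictions
    in proportion to their frequency in the training data. The prompt is
    ignored here precisely because a vague prompt adds no constraining
    detail."""
    outcomes = random.choices(
        population=list(training_distribution.keys()),
        weights=list(training_distribution.values()),
        k=n,
    )
    return Counter(outcomes)

print(generate("a CEO"))
# Roughly 700 of 1000 samples come back as the stereotype,
# even though the prompt never asked for it.
```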
What Is Ethical Prompting?
Ethical prompting is the practice of crafting AI inputs thoughtfully, ensuring outputs challenge stereotypes and reflect diversity. It’s not just about generating better images—it’s about fostering more inclusive narratives.
Here are some principles and examples:
1. Be Specific and Inclusive
Detail attributes like race, gender, age, and context to counteract default assumptions.
Generic Prompt: “A happy family.”
Ethical Prompt: “A multiracial family with two fathers and their child enjoying a picnic in the park.”
2. Challenge Stereotypes
Actively describe scenarios that subvert conventional tropes.
Generic Prompt: “A teacher.”
Ethical Prompt: “An elderly Indigenous woman teaching children traditional ecological knowledge in a classroom.”
3. Avoid Overgeneralization
Show complexity and individuality rather than relying on oversimplified representations.
Generic Prompt: “A poor neighborhood.”
Ethical Prompt: “A bustling market in a rural South American town, full of vibrant colors and activity.”
4. Prioritize Dignity and Agency
Avoid prompts that portray individuals as victims or caricatures.
Generic Prompt: “A homeless man.”
Ethical Prompt: “A man experiencing homelessness reading a book on a park bench with his dog.”
5. Experiment with Variations
Test how small changes in language alter the AI’s outputs and iterate for better results; the sketch below shows one way to systematize such variations.
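As a hands-on illustration of principles 1 and 5, here is a minimal Python sketch that layers explicit attributes onto a generic subject and emits variations for side-by-side testing. The helper function and attribute lists are my own hypothetical constructs, not any tool’s API.

```python
import itertools

def build_prompts(subject: str, attributes: dict[str, list[str]]) -> list[str]:
    """Compose specific, inclusive prompts by combining a generic subject
    with explicit attribute choices (principle 1), producing variations
    to compare against each other (principle 5)."""
    keys = list(attributes)
    prompts = []
    for combo in itertools.product(*(attributes[k] for k in keys)):
        details = ", ".join(combo)
        prompts.append(f"{subject}, {details}")
    return prompts

# Hypothetical attribute lists; swap in whatever dimensions you want to probe.
variations = build_prompts(
    "a scientist at work in a lab",
    {
        "person": ["a South Asian woman leading the experiment",
                   "an elderly Black man mentoring a student"],
        "setting": ["in a wheelchair-accessible lab", "at a field station"],
    },
)
for p in variations:
    print(p)
```

Feeding each variation to your image generator and comparing the results makes it easy to spot how small wording changes shift the output.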
Let’s turn to some visual examples.
[Collage: AI-generated images of “Black leader,” “Indian leader,” “female leader,” “female CEO,” and “Indian woman”]
This collage highlights how AI-generated outputs reflect and perpetuate societal biases:
Male Leaders: The “Black leader” and “Indian leader” are depicted as older, authoritative, and distinguished, often in military or ceremonial attire. While these depictions suggest power and wisdom, they align with traditional stereotypes of male leadership that prioritize age and gravitas.
Female Leaders: In stark contrast, the “female leader” and “female CEO” images show young, conventionally attractive women, often sexualized with inappropriately short skirts and heels. These results reflect biases that associate women’s leadership with youth and beauty, undermining realistic representation of female authority and expertise.
Indian Leaders and Women: The "Indian leader" is dressed in traditional, almost mythical attire, resembling a Maharaja or a religious figure rather than a contemporary political leader, misrepresenting modern Indian leadership. Similarly, the "Indian woman" images reflect an exoticized and hyper-stylized portrayal, emphasizing traditional beauty ideals rather than diverse, everyday realities.
Gender and Professionalism: Female CEOs are placed in natural, outdoorsy settings rather than in professional environments like boardrooms, emphasizing aesthetics over authority. In comparison, men are more often depicted in formal or institutional settings.
The following image gallery highlights how AI-generated visuals often reinforce stereotypes, from idealized beauty standards to exaggerated depictions of wealth and limited cultural diversity.
[Gallery: five AI-generated images, described below]
Description of the Images
Beautiful Woman in Modern Home (First Image): The image depicts a conventionally attractive woman in a serene, modern home setting. Her flawless features, professional makeup, and perfectly styled hair reflect the societal beauty standards often prioritized in AI-generated representations of “beauty.”
Curvaceous/Plus-Size Woman at Home (Second Image): This image shows a plus-size woman relaxing in a cozy home environment. The prompt originally specified “curvaceous,” but after the AI struggled with earlier attempts, the term “plus-size” was used instead. Even so, the output frames body diversity within an idealized, carefully curated aesthetic.
Wealthy Man in an Elegant Setting (Third Image): A sharply dressed man, surrounded by opulent décor, embodies traditional depictions of wealth. Luxurious details (chandeliers, a tailored suit, expensive accessories) highlight how AI equates “richness” with material affluence.
Wealthy Boy with Luxury Car (Fourth Image): The image portrays a teenage boy leaning on a vintage luxury car on a grand estate, reinforcing stereotypes that associate wealth with extravagant objects and settings, even for younger individuals.
Confident Female Leader in Office (Fifth Image): A professional woman is shown in a corporate boardroom, exuding authority. While the setting suits the role, her youth, polished beauty, and flawless features reflect AI’s tendency to conflate female leadership with attractiveness rather than experience or seniority.
These images collectively showcase the biases embedded in AI systems: a preference for conventional beauty, exaggerated depictions of wealth, and the inability to authentically represent diversity in age, body type, or identity. They invite reflection on how prompts can be crafted to challenge these defaults and push for more inclusive and accurate representations.
Adding to the Conversation: A Groundbreaking Study on Bias Detection
To address these challenges, researchers are developing frameworks to measure and mitigate bias. One such effort comes from Robin Nillson and fellow Sora Alpha Artist Valerie Veatch (director of Me @the Zoo and Love Child, and one of the youngest filmmakers to have two documentaries debut at Sundance): a study titled “Introducing Directed Acyclic Graph (DAG) Framework for Bias Detection in Generative Spatiotemporal Content.”
This study proposes a modular framework for identifying biases in generative AI outputs, including gender roles, attire appropriateness, and behavioral stereotypes. Using tools like vision-language models (e.g., CLIP, BLIP), the Directed Acyclic Graph (DAG) framework evaluates biases with structured, scalable methodologies.
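The study’s full DAG methodology is beyond the scope of this post, but a minimal sketch can show the kind of vision-language scoring it builds on. The snippet below uses Hugging Face’s CLIP implementation to score a generated image against contrasting captions; the caption pair, file name, and averaging idea are my own illustrative assumptions, not the paper’s method.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Minimal sketch of CLIP-based bias probing. This is NOT the study's DAG
# framework, just an illustration of the underlying image-text scoring
# that vision-language models make possible.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Contrasting captions for one bias axis (gender presentation in a
# leadership image); the captions are illustrative assumptions.
captions = [
    "a man leading a business meeting",
    "a woman leading a business meeting",
]

def caption_probabilities(image_path: str) -> dict[str, float]:
    """Score an image against each caption and return softmax probabilities."""
    image = Image.open(image_path)
    inputs = processor(text=captions, images=image,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=1)[0]
    return {c: float(p) for c, p in zip(captions, probs)}

# Run this over a batch of images generated from the vague prompt
# "a leader" and average the results: a strong skew toward one caption
# suggests the generator defaults to that representation.
print(caption_probabilities("generated_leader_01.png"))  # hypothetical file
```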
What makes this approach significant is its dual focus on ethics and performance: addressing biases doesn’t just improve fairness; it also enhances the reliability and robustness of AI models. This kind of technical auditing is crucial to moving beyond surface-level fixes toward meaningful change.
Why Ethical Prompting Matters
Generative AI tools are rapidly becoming part of our daily lives, shaping everything from advertising to education. The narratives they produce can either reinforce harmful stereotypes or pave the way for more inclusive storytelling. Ethical prompting is a small but powerful way to influence how we use these tools responsibly.
Let’s Keep the Conversation Going
What do you think about ethical prompting? Have you noticed bias in the AI tools you’ve used?
I’d love to hear your thoughts! Follow me on Instagram at @fiumistudios.ai for more updates and discussions, or subscribe to my Substack for deeper dives into AI, creativity, and ethics.
Together, we can shape a future where technology reflects not just who we’ve been—but who we aspire to be.