Misogyny by Math.
The Algorithmic Multiplication of Inequality.

AI transforms intersectionality from a framework for understanding oppression into an engine that manufactures it at scale.

In 1989, legal scholar Kimberlé Crenshaw gave us a word for something Black women had always known: oppression doesn't add, it multiplies. She called it intersectionality. The term emerged from a specific legal case where courts refused to recognize that discrimination against Black women was distinct from discrimination against Black men or white women. The law could see race or gender, but not both simultaneously.

Today, Crenshaw's framework has found precise expression in algorithmic systems that recognize intersectionality only to weaponize it. A Black woman over 50 working in rural administration faces not one disadvantage but four that multiply together, computed with relentless efficiency.

The overlapping vulnerabilities of race, gender, class, age, and geography that Crenshaw identified become the exact data points that algorithms use to sort, exclude, and discriminate with mechanical consistency.

The Mathematical Architecture of Exclusion

To understand this transformation, start with how AI actually works. Machine learning systems identify patterns in data and use those patterns to make predictions. When Amazon built its resume-screening AI, it fed the system ten years of hiring data. The AI learned that successful technical employees were overwhelmingly male. It began downgrading resumes containing the word "women's," as in "women's chess club" or "women's college."

The system was doing exactly what it was designed to do: identify patterns that predicted success as Amazon had defined it. The discrimination emerged from algorithmic processing of biased history.
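The mechanism can be sketched in a few lines. Everything here is invented for illustration (the records, the tokens, the scoring rule — not Amazon's actual system): a scorer fit to biased historical outcomes learns to penalize any token concentrated among rejected resumes, with no notion of gender anywhere in the code.

```python
from collections import defaultdict

# Synthetic, deliberately biased "historical" hiring records:
# (resume tokens, hired?). The pattern here is the bias, not merit.
history = [
    ({"java", "chess", "captain"}, True),
    ({"java", "leadership"}, True),
    ({"python", "chess"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's", "college"}, False),
    ({"python", "leadership", "women's"}, False),
]

def token_hire_rates(records):
    """Fraction of historical resumes containing each token that were hired."""
    hires, totals = defaultdict(int), defaultdict(int)
    for tokens, hired in records:
        for t in tokens:
            totals[t] += 1
            hires[t] += hired
    return {t: hires[t] / totals[t] for t in totals}

def score(tokens, rates):
    """Average historical hire rate of a resume's known tokens."""
    known = [rates[t] for t in tokens if t in rates]
    return sum(known) / len(known) if known else 0.5

rates = token_hire_rates(history)
print(rates["women's"])  # 0.0: the token perfectly predicts rejection here
print(score({"python", "chess"}, rates))
print(score({"python", "chess", "women's"}, rates))  # adding the token lowers the score
```

The token "women's" never needs to be flagged as gendered; its historical correlation with rejection is enough for the scorer to penalize it.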

Now multiply this process across every system making decisions about employment, credit, healthcare, education, and criminal justice. Each system learns from data that encodes centuries of discrimination. But unlike human bias, which might be inconsistent or contestable, algorithmic bias operates with perfect consistency and institutional authority.

Princeton and Columbia research teams demonstrated this multiplication effect directly. They showed AI evaluators identical resumes with names suggesting different racial and gender identities. The systems discriminated intersectionally. Black women received the lowest ratings, then Black men and white women, then white men. The AI had learned to recognize and penalize intersectional identity through pattern recognition.

This systematic approach makes discrimination deniable. The algorithm never explicitly considers race or gender. It considers ZIP codes that correlate with race, shopping patterns that correlate with gender, names that correlate with both. The discrimination becomes laundered through proxy variables, nearly impossible to prove legally while remaining effective practically.
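A minimal simulation of that laundering, with invented groups and ZIP codes: a rule that never sees group membership still produces sharply different outcomes by group, because residential segregation makes ZIP code a stand-in.

```python
import random
random.seed(0)

# Synthetic population: group membership is never given to the decision
# rule, but ZIP code correlates with it (segregation in miniature).
def person():
    group = random.choice(["A", "B"])
    zip_code = random.choices(
        ["10001", "10002"],
        weights=[0.9, 0.1] if group == "A" else [0.1, 0.9],
    )[0]
    return group, zip_code

population = [person() for _ in range(10_000)]

# A "group-blind" rule that only looks at ZIP code...
def approve(zip_code):
    return zip_code == "10001"

# ...still yields sharply different approval rates by group.
for g in ("A", "B"):
    members = [z for grp, z in population if grp == g]
    rate = sum(approve(z) for z in members) / len(members)
    print(g, round(rate, 2))
```

Auditing such a system for the protected attribute finds nothing; only measuring outcomes by group reveals the disparity.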

The Compound Interest of Disadvantage

The IMF estimates that 60 percent of jobs in advanced economies face AI exposure. But this average conceals how exposure compounds through intersectional vulnerability. Women hold 73 percent of clerical and administrative roles globally. These positions face the highest automation risk. Older workers in these roles face 50 percent higher displacement risk than younger colleagues. Rural workers have 40 percent fewer alternative opportunities when displacement occurs.

Watch the multiplication: a 55-year-old woman in rural administration works in the occupational category with the highest automation exposure, carries 1.5 times the displacement risk of her younger colleagues, and faces 40 percent fewer alternatives once displaced. Each factor compounds the others. The math becomes catastrophic.
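To make the compounding concrete, here is an illustrative calculation. The baseline risk and the occupational multiplier are assumptions of mine; the 1.5 and 1.4 factors come from the figures cited above, read as relative-risk multipliers.

```python
# Illustrative only: compound relative-risk multipliers on an assumed baseline.
baseline = 0.10     # assumed baseline chance of displacement
occupation = 2.0    # assumed multiplier for high-exposure clerical work
age = 1.5           # 50 percent higher displacement risk for older workers
geography = 1.4     # 40 percent fewer alternatives, read as a risk multiplier

risk = baseline
for factor in (occupation, age, geography):
    risk *= factor

print(round(risk, 2))  # 0.42: a 10% baseline compounds to 42%
```

Added together, the same factors would suggest a modest bump; multiplied, they more than quadruple the risk.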

This multiplication extends beyond job loss to the capacity for adaptation. The OECD documents that women spend 4.5 hours daily on unpaid care work versus 1.5 hours for men. This three-hour daily gap means women have systematically less time for retraining. Geographic isolation in rural areas brings 38 percent less access to high-speed internet. Age discrimination makes workers over 50 half as likely to be accepted into retraining programs.

Each additional identity marker multiplies impossibility. Take Sumi, who worked the sewing line in Dhaka for twelve years. When automated cutting machines arrived, she was told to train them. Show the sensors how to recognize fabric flaws. Teach the system her expertise. Three months later, the machine had learned enough. Her knowledge had been extracted, digitized, scaled. She was redundant.

She tried the new textile firms. They wanted technical skills she lacked. She tried the coding bootcamp her brother suggested. Classes ran during the hours she cared for her mother. She tried online courses. Power cuts made consistent study impossible. Each path closed as she approached it. Her story repeats across Bangladesh's garment sector, where women comprise 80 percent of workers and automation eliminated 31 percent of jobs between 2013 and 2022.

The Extraction and Encoding of Intersectional Patterns

AI systems extract value from intersectional identities while encoding their marginalization as natural and inevitable. Consider the global content moderation industry. Platforms like Meta outsource traumatic content review to contractors in Kenya, the Philippines, and India. These workers, 78 percent women, earn $2 hourly to view horrific content that causes documented psychological damage.

Grace moderates content for a major platform from Nairobi. Eight hours daily reviewing violence, abuse, images that wake her at night. She trains the AI to recognize what violates guidelines. Each correction teaches the system. Each tag transfers her expertise into code. The AI systems these workers train learn from their labor while simultaneously learning that workers with their profiles deserve minimal compensation.

This extraction operates through what scholars call "data colonialism." Women in the Global South generate data through their digital labor, their platform work, their social media participation. This data trains AI systems that primarily benefit companies in the Global North. But unlike historical colonialism, which required physical occupation, data colonialism operates through algorithmic extraction that appears voluntary and beneficial.

The University of Melbourne documented this process in gig economy platforms. Women receive 37 percent less per task than men for identical work. The platforms learn from every transaction. They discover that women accept lower wages, work less desirable hours, and tolerate worse conditions. The AI adjusts accordingly, offering women systematically worse opportunities, which they must accept due to limited alternatives, which reinforces the pattern.

Each interaction teaches the system that intersectional disadvantage represents information to be processed rather than discrimination to be corrected. This encoded learning then shapes how millions of women will be evaluated, creating a cascade where individual exploitation becomes systemic exclusion.

The Impossibility of Individual Solution

This cascade from exploitation to exclusion is why individual solutions fail. The standard prescription for technological displacement is reskilling: learn AI, adapt to change, upgrade your capabilities. This prescription ignores how intersectionality makes individual adaptation structurally impossible for those most vulnerable.

Consider what "learning to code" actually requires. Time for study, which care responsibilities prevent. Money for training, which low wages prevent. Internet access, which rural location prevents. Confidence in technical spaces, which gender discrimination erodes. Belief that effort will be rewarded, which racial discrimination undermines.

The World Economic Forum found that women comprise 54 percent of retraining program participants but only 23 percent of those transitioning to higher-paid technical roles. The pipeline hemorrhages precisely where intersectional vulnerabilities compound. Women complete basic digital training but can't access advanced programs. They gain technical skills but face discrimination in technical hiring. They demonstrate competence but need 2.5 times more achievements to be considered equally qualified.

In India, women are 43 percent of STEM graduates but only 14 percent of the engineering workforce. The education system produces qualified women, but intersectional discrimination prevents their participation. They face gender bias in hiring, caste discrimination in workplace culture, and family pressure to prioritize marriage over career. Each barrier reinforces the others, making individual perseverance insufficient against structural exclusion.

The Architecture of Inherited Destiny

Current AI systems are being built by a remarkably homogeneous group. Women comprise 22 percent of AI professionals globally. In venture capital, 89 percent of AI startup funding goes to all-male founding teams. The companies building humanity's algorithmic future are 78 percent male, overwhelmingly white and Asian, predominantly from privileged backgrounds.

This homogeneity shapes what gets built. The same intersectional disadvantages that exclude women from other fields operate with particular force in AI. Technical education requires time and money that intersectional disadvantage prevents. AI careers demand geographic mobility that care responsibilities limit. Venture capital flows through networks that historical exclusion created.

Those least likely to experience intersectional disadvantage design systems that encode it as natural. When Apple's health app tracked everything except menstruation, it showed whose bodies were considered default. When voice assistants responded to harassment with flirtation, it showed whose comfort mattered. When recruitment algorithms learned to discriminate against women with children, it demonstrated whose life patterns were deemed standard.

The young men building these systems in Palo Alto and Shenzhen share remarkably similar backgrounds. Elite universities. Parental support. Freedom from care responsibilities. They build what they know. Shape systems for patterns they recognize. The distance between their lived experience and Sumi's in Dhaka or Grace's in Nairobi might as well be measured in light years.

UNESCO's report "I'd Blush If I Could" documented how this architectural bias scales culturally. AI assistants with female voices and submissive personalities teach billions of daily interactions that female-coded entities exist to serve, defer, and tolerate abuse. Children absorb these patterns as normal, encoding intersectional subordination into the next generation's expectations.

Historical discrimination becomes computational destiny.

The Multiplication Future

The IMF projects that women's labor income share could decline by 4 percentage points by 2030 in advanced economies. But this average conceals how losses will concentrate intersectionally. The Black woman in rural America won't lose 4 percent. She'll lose her entire economic foothold. The Indigenous woman in Guatemala won't see gradual decline. She'll face complete exclusion from the formal economy.

Current trajectories point toward a future where intersectional advantage compounds into extreme wealth while intersectional disadvantage compounds into complete marginalization. The venture capitalist's son in Silicon Valley will have AI assistants from birth, technical education from childhood, and network advantages that multiply exponentially. The garment worker's daughter in Dhaka will face algorithmic exclusion from education, employment, and opportunity at every turn.

This projection follows algorithmic logic: when systems multiply advantage and disadvantage over time, gaps expand exponentially. Small initial differences become chasms. Temporary disadvantages become permanent exclusions. Historical discrimination becomes computational destiny.
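A toy compounding model makes the expansion visible. The growth rates here are invented for illustration, not forecasts:

```python
# Two trajectories that differ only in a small per-year multiplier.
advantaged, disadvantaged = 1.0, 1.0
for year in range(30):
    advantaged *= 1.05      # assumed +5% per year from compounding access
    disadvantaged *= 0.98   # assumed -2% per year from compounding exclusion

print(round(advantaged, 1))                   # ≈ 4.3
print(round(disadvantaged, 2))                # ≈ 0.55
print(round(advantaged / disadvantaged, 1))   # the gap: ≈ 7.9x after 30 years
```

A 7-point difference in annual growth, left to compound for a generation, turns two equal starting points into nearly an eightfold gap.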

Crenshaw's Framework as Diagnostic and Cure

Kimberlé Crenshaw gave us intersectionality as a lens to see what law refused to recognize. Today, her framework offers both diagnosis and direction. It reveals how AI transforms multiplication of oppression from metaphor to mathematics. But it also suggests that understanding multiplication enables intervention.

The Algorithmic Justice League, founded by Joy Buolamwini after experiencing firsthand how facial recognition failed to see her dark skin, now audits AI systems for intersectional bias. Their work forced IBM, Microsoft, and Amazon to pause facial recognition sales to police departments. This shows what organized resistance can achieve: making the multiplication of bias visible forced companies to reconsider deployment.

If we recognize that AI systems multiply disadvantage intersectionally, we can build differently. Instead of training on biased history, we could weight data to correct historical exclusion. Instead of maximizing narrow efficiency, we could include intersectional equity as a core metric. Instead of building systems that multiply disadvantage, we could create ones that multiply opportunity.
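One concrete version of "weighting data to correct historical exclusion" is inverse-frequency reweighting, a standard technique for imbalanced training data. A minimal sketch, with hypothetical group labels:

```python
from collections import Counter

# Historical training records, skewed 9-to-1 toward one group.
labels = ["A"] * 900 + ["B"] * 100

# Inverse-frequency weights: each group gets equal total weight,
# so the underrepresented group is not drowned out during training.
counts = Counter(labels)
weights = {g: len(labels) / (len(counts) * n) for g, n in counts.items()}

print(weights["B"])  # 5.0: each minority example counts for more
print(weights["A"] * counts["A"], weights["B"] * counts["B"])  # equal totals
```

Passed as per-example weights to a training loss, this gives each group equal aggregate influence regardless of how skewed the historical record is.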

This requires recognizing that intersectional oppression represents systematic design rather than unfortunate accident. AI makes this design visible, mathematical, undeniable. The algorithm computes discrimination openly, processes it efficiently, scales it precisely.

The Power to Reshape

Will those who experience intersectional multiplication of disadvantage have any power to reshape AI in return? So far, it does not look that way. The very characteristics that make people vulnerable to algorithmic discrimination exclude them from the rooms where AI gets designed. The multiplication machine operates without input from those it multiplies against.

Yet resistance emerges. Platform workers in Kenya, Venezuela, and the Philippines organize despite attempts to prevent collective action. Women technologists build alternative systems prioritizing care over efficiency. Communities document algorithmic harm and demand accountability.

Every morning, Grace opens her laptop in Nairobi. Reviews content that breaks her so AI can learn her patterns. But she also organizes with other moderators, documenting the psychological toll, building the case for change. Every morning, Sumi passes the factory in Dhaka where she once worked. Sees the machines running with knowledge she taught them. But she also teaches younger women what she's learned about extraction, about protecting their expertise.

Crenshaw showed us how to see intersectionality. Now we must learn to see how AI transforms it from analysis to algorithm, from framework to formula, from understanding to engine. The architecture of discrimination has never been more visible or more mathematical.

The multiplication machine runs on decisions that could be different. Intersectionality began as revelation. In the age of AI, it becomes algorithm. But algorithms are human creations, and what humans create, humans can redesign. The cascade flows, but every cascade can be redirected when we understand its architecture and organize to rebuild it.

The machine that multiplies exclusion could multiply opportunity. The patterns that encode discrimination could encode repair. The systems that compound disadvantage could compound justice. Not automatically. Not easily. But possibly, if those most affected gain power over the multiplication itself.

This is not destiny. It can be redesigned.

And if you're a man who read to this conclusion, you now hold knowledge that comes with choice. You likely sit closer to the rooms where these systems get built than Grace or Sumi ever will. You understand the mathematics of multiplication. You see how discrimination compounds through code.

The question becomes what you do in the meeting when someone says the bias is acceptable, the timeline is tight, the diversity hire can wait. Your silence multiplies the disadvantage. Your voice could multiply something else. The algorithm learns from the data we feed it, but first it learns from the decisions of those who build it. You might be one of them.

What you build next determines whether our daughters face algorithms that recognize their full humanity or reduce them to data points that deserve 37 percent less.

What will you teach AI about who deserves opportunity?

Sources

Books and Reports

  • Buolamwini, Joy. Unmasking AI: My Mission to Protect What Is Human in a World of Machines. New York: Random House, 2023.
  • Crenshaw, Kimberlé. "Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics." University of Chicago Legal Forum 1989, no. 1: 139-167.
  • Crenshaw, Kimberlé. "Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Color." Stanford Law Review 43, no. 6 (1991): 1241-1299.
  • Couldry, Nick, and Ulises A. Mejias. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press, 2019.
  • International Labour Organization. World Employment and Social Outlook: The Role of Digital Labour Platforms in Transforming the World of Work. Geneva: ILO, 2021.
  • International Labour Organization. Care Work and Care Jobs for the Future of Decent Work. Geneva: ILO, 2023.
  • International Labour Organization. Global Report on AI and Work. Geneva: ILO, 2025.
  • International Monetary Fund. "Gen-AI: Artificial Intelligence and the Future of Work." IMF Working Paper WP/24/140. Washington, DC: IMF, 2024.
  • International Telecommunication Union. Measuring Digital Development: Facts and Figures 2024. Geneva: ITU, 2024.
  • Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press, 2018.
  • O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.
  • Organisation for Economic Co-operation and Development. OECD Employment Outlook 2025: Artificial Intelligence and the Labour Market. Paris: OECD Publishing, 2025.
  • Organisation for Economic Co-operation and Development. OECD Skills Outlook 2025. Paris: OECD Publishing, 2025.
  • Stanford Institute for Human-Centered Artificial Intelligence. AI Index Report 2025. Stanford: Stanford HAI, 2025.
  • UNESCO. I'd Blush If I Could: Closing Gender Divides in Digital Skills Through Education. Paris: UNESCO, 2019.
  • UNESCO. AI and Gender: A Missing Link for Achieving SDGs. Paris: UNESCO, 2024.
  • UN Women. Progress on the Sustainable Development Goals: The Gender Snapshot 2024. New York: UN Women, 2024.
  • World Bank. Women, Business and the Law 2024. Washington, DC: World Bank, 2024.
  • World Economic Forum. The Future of Jobs Report 2025. Geneva: WEF, 2025.

Articles

  • Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, October 10, 2018.
  • Dzieza, Josh. "AI Is a Lot of Work: The Data Labelers Behind AI's Boom Are Demanding Better Conditions." The Verge, June 20, 2023.
  • Hao, Karen. "The Human Work That Powers AI: Content Moderation in Kenya." MIT Technology Review, April 2022.
  • Kellogg, Katherine C., Melissa A. Valentine, and Angèle Christin. "Algorithms at Work: The New Contested Terrain of Control." Academy of Management Annals 14, no. 1 (2020): 366-410.
  • Perrigo, Billy. "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic." TIME, January 18, 2023.
  • Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429-435.

Research Papers and Studies

  • Asia Foundation. Women in the Bangladeshi Garment Sector: Challenges and Opportunities. San Francisco: Asia Foundation, 2023.
  • Fairwork Foundation. Fairwork Global Report 2024: Platform Labour Standards. Oxford: Fairwork, 2024.
  • Princeton University and Columbia University. "Resume Audit Studies: AI Discrimination in Hiring." Working Paper, 2024.
  • University of Melbourne. "Gender Pay Gaps in the Gig Economy: Evidence from Global Platforms." Melbourne Institute Working Paper, 2024.

Legal Cases and Policy Documents

  • European Commission. Proposal for a Regulation on Artificial Intelligence (AI Act). Brussels: EC, 2024.
  • Sama vs. Meta Platforms, Inc. Content Moderation Labor Practices Case. Kenya Employment and Labour Relations Court, 2023-2025.

Organizational Websites

  • Algorithmic Justice League. https://www.ajl.org