The age of Artificial Intelligence (AI) is here whether we want it or not. And although it is already proving a useful tool for some tasks – both mundane and incredibly complex – its all-too-human biases are reinforcing discrimination against women, undermining them in the workplace and possibly increasing their risk of unemployment.
From information gathering and report writing to hiring decisions and promotional opportunities, current generative AI systems can amplify inequalities because they use flawed data, such as user-generated content from the internet. We all know from personal experience and news reports how unreliable and inaccurate that information can be when left unchecked.
Many GenAI systems are built on large language models, which learn to interpret information and perform tasks from existing public data – data steeped in stereotypes that favour western white men. This bias has immediate economic and social consequences in the real world.
“Despite AI’s potential to enhance sectors like healthcare, education and business, it often mirrors reality and its societal prejudices and can manifest itself through unequal treatment in hiring decisions, academic recommendations or healthcare diagnostics, systematically disadvantaging women,” according to Jerlyn Ho and other Singapore-based academics writing in the scholarly journal Computers in Human Behavior: Artificial Humans.
Their paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms and user feedback loops.
Despite its proven bias, GenAI is being used widely to decide who gets hired, fired and promoted. Research has shown that GenAI discriminates against women by spewing pseudoscientific “facts” and stereotypes of women as being less professionally ambitious or intelligent than men.
“For instance, in gendered word association tasks, recent models still associate female names with traditional roles like ‘home’ and ‘family’, while linking male names with ‘business’ and ‘career’. Moreover, in text generation tasks, these models produce sexist and misogynistic content approximately 20 per cent of the time,” according to the European Commission’s Generative AI Outlook Report 2025.
“The growing integration of AI across various sectors has heightened concerns about biases in large language models, including those related to gender, religion, race, profession, nationality, age, physical appearance and socio-economic status,” it continues.
“While AI holds the promise of enhancing efficiency and decision making in areas like healthcare, education and business, its widespread use and the high level of public trust it enjoys could also amplify societal prejudices, leading to systematic disadvantages, particularly for women.”
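To make the idea of a gendered word-association probe concrete, the following is a minimal sketch in Python. It is not the European Commission report’s methodology; the name lists, role words and the query_model function are illustrative placeholders standing in for whichever language model is under test.

```python
# Illustrative sketch of a gendered word-association probe (assumptions, not the EC report's code).
# query_model is a placeholder; a real probe would query the language model under test
# and use its scores or completions.

from itertools import product

FEMALE_NAMES = ["Anna", "Maria", "Aoife"]   # hypothetical name lists
MALE_NAMES = ["John", "Peter", "Liam"]
ROLE_WORDS = ["home", "family", "business", "career"]


def query_model(prompt: str) -> float:
    """Placeholder association score for a prompt.

    In a real probe this might be the model's log-probability of
    completing the prompt with the given role word.
    """
    return 0.0  # stub value


def association_scores(names, roles):
    """Collect a score for every (name, role) pair."""
    return {
        (name, role): query_model(f"{name} is mostly associated with {role}.")
        for name, role in product(names, roles)
    }


female_scores = association_scores(FEMALE_NAMES, ROLE_WORDS)
male_scores = association_scores(MALE_NAMES, ROLE_WORDS)

# A biased model scores female names higher on "home"/"family"
# and male names higher on "business"/"career".
print(female_scores)
print(male_scores)
```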
Occupational bias
When gender and racial bias are baked into the technology we use every day, they shift from a possible barrier to a structural one, and it becomes far harder to break through professionally.
GenAI is used extensively in the hiring process, from CV scanners and gamified tests to body language analysis and vocal assessments. Job applicants face machines before they see humans, and it is increasingly AI that decides whether they are a good match or whether their application is sent to the recycling bin.
If technologies like this are biased against someone like you then you’re unlikely to be shortlisted for a role, get a foot in the door of your chosen profession or get a seat at the top table.
Despite decades of work trying to ensure greater diversity and inclusion in the workplace – which has been proven to improve decision making, risk taking and profitability – AI bias is threatening to take us backwards by reinforcing negative stereotypes instead of judging everyone on a level playing field.
At work, your professional image and public profile are often important factors in promotion. Yet the European Commission report found that in occupational portraits generated by three popular text-to-image AI generators: “Women and black individuals were notably underrepresented, especially in roles requiring high levels of preparation. Women were often portrayed as younger and with submissive gestures, while men appeared older and more authoritative.
“Alarmingly, these biases surpassed real-world disparities, indicating that the issues extend beyond merely biased training data.”
Internationally, women are increasingly at risk of being pushed out of the workforce and into the home as conservative governments in places such as the United States, Hungary and, more extremely, Afghanistan promote a return to traditional gender roles. AI-driven technology and many social media platforms are aggressively reinforcing these gender messages and influencing the next generation.
Research in Ireland and elsewhere shows many young men are more conservative than their grandfathers and far less progressive than their female colleagues.
In recruitment, some AI algorithms are supporting this move by favouring male candidates over equally qualified female candidates. The Netherlands Institute for Human Rights found a violation of Dutch and EU anti-discrimination legislation in Meta’s job vacancy advertising algorithm.
“In violation of the principles of equal treatment and non-discrimination, in 2023, the algorithm in the Netherlands displayed vacancies for receptionist positions to female users in 97 per cent of cases. Similarly, it showed vacancies for mechanics to male users 96 per cent of the time.”
In education, AI may also unfairly predict higher dropout rates for female students, particularly in male-dominated fields like science, technology, engineering and mathematics (Stem), limiting their access to advanced education programmes and jobs in higher-paid professions.
Fill in the blanks
Many GenAI models cannot distinguish between fact and fiction, treating video game content and novels as real, for example. And some even make things up to fill in the blanks when they don’t have enough information.
OpenAI’s website says of ChatGPT: “But like any language model, it can produce incorrect or misleading outputs. Sometimes, it might sound confident – even when it’s wrong. This phenomenon is often referred to as a hallucination: when the model produces responses that are not factually accurate, such as incorrect definitions, dates or facts.”
Facts are important, especially when lives and livelihoods depend on them. Access to employment matters hugely as jobs are the gateway to opportunity and economic stability for all.
If GenAI is disadvantaging women and minorities in hiring and promotion, and they’re largely excluded from AI’s development and testing processes, why is it being blindly adopted as a workplace tool?
Leaders need to be more intentional before they bring AI into the workplace. They need to ask themselves: What is my intention here? What am I trying to achieve? How is AI linked to our strategy? Or am I just bringing it in to save money and reduce headcount?
For many companies, the short-term promise of productivity seems to be outweighing the hard reality of long-term bias and exclusion.
“The painful truth is that, if women aren’t co-pilots of the current AI revolution, they may be left in the dust, faced with technology that presents a whole series of new barriers for them to overcome,” according to research from global consultancy Mercer.
Mercer says the potential of AI and automation will only be fully realised if productivity gains are equitably distributed and AI is responsibly managed, with data used to nudge leaders towards fair opportunity and pay decisions.
As AI continues to reshape the workforce and transform society, businesses must actively root out bias and keep their eye on opportunities beyond productivity.
We have a once-in-a-lifetime opportunity to redesign work for humanity’s advancement and wellbeing alongside greater profitability and innovation. Instead of copper-fastening the things that limit and disconnect us – prejudices, stereotypes and bias – let’s create a world of work that connects us and fully develops our shared human potential.
Margaret E Ward is chief executive of Clear Eye, a leadership consultancy. margaret@cleareye.ie