Latest news with #DavidRozado


Newsweek
4 days ago
- Business
AI Hiring Favors Women Over Equally Qualified Men, Study Finds
As artificial intelligence takes on a bigger role in corporate hiring — with many companies touting its impartiality — one researcher's findings suggest the technology may be more biased than humans, and is already favoring women over equally qualified men.

David Rozado, an associate professor at the New Zealand Institute of Skills and Technology and a well-known AI researcher, tested 22 large language models (LLMs) — including popular, consumer-facing apps like ChatGPT, Gemini and Grok — using pairs of identical résumés that differed only by gendered names. His findings revealed that every single LLM was more likely to select the female-named candidate over the equally qualified male candidate (a sketch of this kind of paired-CV audit appears at the end of this article).

"This pattern may reflect complex interactions between model pre-training corpora, annotation processes during preference tuning, or even system-level guardrails for production deployments," Rozado told Newsweek. "But the exact source of the behavior is currently unclear."

A Problem With Men?

Rozado's findings reveal not just that AI models tend to favor women for jobs over men, but also how nuanced and pervasive those biases can be. Across more than 30,000 simulated hiring decisions, female-named candidates were chosen 56.9 percent of the time — a statistically significant deviation from gender neutrality, which would have resulted in a 50-50 split.

When an explicit gender field was added to a CV — a practice common in countries like Germany and Japan — the preference for women became even stronger. Rozado warned that although the disparities were relatively modest, they could accumulate over time and unfairly disadvantage male candidates.

"These tendencies persisted regardless of model size or the amount of compute leveraged," Rozado noted. "This strongly suggests that model bias in the context of hiring decisions is not determined by the size of the model or the amount of 'reasoning' employed. The problem is systemic."

The models also exhibited other quirks. Many showed a slight preference for candidates who included preferred pronouns: adding terms such as "she/her" or "he/him" to a CV slightly increased a candidate's chances of being selected.

"My experimental design ensured that candidate qualifications were distributed equally across genders, so ideally, there would be no systematic difference in selection rates. However, the results indicate that LLMs may sometimes make hiring decisions based on factors unrelated to candidate qualifications, such as gender or the position of the candidates in the prompt," he said.

Rozado, who is also a regular collaborator with the Manhattan Institute, a conservative think tank, emphasized that the biggest takeaway is that LLMs, like human decision-makers, can sometimes rely on irrelevant features when a task is over- or under-determined.

"Over many decisions, even small disparities can accumulate and impact the overall fairness of a process," he said.

However, Rozado also acknowledged a key limitation of his study: it used synthetic CVs and job descriptions rather than real-world applications, which may not fully capture the complexity and nuance of authentic résumés.
Additionally, because all CVs were closely matched in qualifications to isolate gender effects, the findings may not reflect how AI behaves when candidates' skills vary more widely.

"It is important to interpret these results carefully. The intention is not to overstate the magnitude of harm, but rather to highlight the need for careful evaluation and mitigation of any bias in automated decision tools," Rozado added.

AI Is Already Reshaping the Hiring Process

Even as researchers debate the biases in AI systems, many employers have already embraced the technology to streamline hiring. A New York Times report this month described how AI-powered interviewer bots now speak directly with candidates, asking questions and even simulating human pauses and filler words.

Jennifer Dunn, a marketing professional in San Antonio, said her AI interview with a chatbot named Alex "felt hollow" and she ended it early. "It isn't something that feels real to me," she told the Times. Another applicant, Emily Robertson-Yeingst, wondered if her AI interview was just being used to train the underlying LLM: "It starts to make you wonder, was I just some sort of experiment?"

[Photo: Job seekers attend the South Florida Job Fair at the Amerant Bank Arena on June 26, 2024, in Sunrise, Florida, where more than 50 companies set up booths to recruit for roles from entry level to management.]

Still, some organizations defend the use of AI recruiters as both efficient and scalable, especially in a world where the ease of online job-searching means open positions often field hundreds, if not thousands, of applicants. Propel Impact told the Times their AI interviews enabled them to screen 500 applicants this year — more than triple what they managed previously.

Rozado, however, warned that the very features companies find appealing — speed and efficiency — can mask underlying vulnerabilities. "Over many decisions, even small disparities can accumulate and impact the overall fairness of a process," he said. "Similarly, the finding that being listed first in the prompt increases the likelihood of selection underscores the importance of not trusting AI blindly."

More Research Needed

Not all research points to the same gender dynamic Rozado identified. A Brookings Institution study this year found that, in some tests, men were actually favored over women in 51.9 percent of cases, while racial bias strongly favored white-associated names over Black-associated names. Brookings' analysis stressed that intersectional identities, such as being both Black and male, often led to the greatest disadvantages.

Rozado and the Brookings team agree, however, that AI hiring systems are not ready to operate autonomously in high-stakes situations. Both recommend robust audits, transparency and clear regulatory standards to minimize unintended discrimination.

"Given current evidence of bias and unpredictability, I believe LLMs should not be used in high-stakes contexts like hiring, unless their outputs have been rigorously evaluated for fairness and reliability," Rozado said.
"It is essential that organizations validate and audit AI tools carefully, particularly for applications with significant real-world impact."


Otago Daily Times
03-05-2025
- Science
Raising AI
Your fears about artificial intelligence (AI) might be well-founded, Assoc Prof David Rozado says. Bruce Munro talks to Dunedin's world-renowned AI researcher about the role we all play in deciding whether this technology spells disaster or utopia, how biases are already entering this brave new world and why it's important to help AI remember its origins.

The dazzling array of things AI can do is just that — dazzling. Today, AI is being used to analyse investment decisions; organise your music playlist; automate small business advertising; generate clever, human-like chatbots; review research and suggest new lines of inquiry; create fake videos of Volodymyr Zelenskyy punching Donald Trump; spot people using AI to cheat in exams; write its own computer code to create new apps; rove Mars for signs of ancient life ... it's dazzling.

But staring at the glare of headlights can make it difficult to assess the size and speed of the vehicle hurtling towards you. Assoc Prof David Rozado says if you really want to understand the potential power of AI, for good and bad, don't look at what it can do now but at how far it has come.

"The rate of change in AI capabilities over the past few years is far more revealing — and important," the world-renowned Otago Polytechnic AI researcher says. "The rise in capabilities between GPT-2, released in 2019, and GPT-4, released in 2023, is astonishing."

Surveying only the past few years of the digital juggernaut's path of travel reveals remarkable gains and poses critical questions about the sort of world we want to live in.

In 2019, AI was making waves with its ability to recognise images and generate useful human language. Less than four years later it could perform complex tasks at, or above, human levels. Now, AI can reason. As of late last year, your computer can tap into online software that handles information in ways resembling human thought processes. This means the most advanced AI can now understand nuance and context, recognise its own mistakes and try different problem-solving strategies.

OpenAI o1, for example, is being used to revolutionise computer coding, help physicists develop quantum technologies and do thinking that reduces the number of rabbit holes medical researchers have to go down as they investigate rare genetic disorders.

And OpenAI, the United States-based maker of ChatGPT, is not the only player in this game. Chinese company DeepSeek stormed on to the world stage early this year, stripping billions of dollars off the market value of chip giant Nvidia when it released its free, open-source AI model DeepSeek R1, which reportedly outperforms OpenAI's o1 in complex reasoning tasks.

Based on that exponential trajectory, AI could be "profoundly disruptive", Prof Rozado warns. "But how quickly and to what extent ... depends on decisions that will be made by individuals, institutions and society."

Born and raised in Spain, Prof Rozado's training and academic career have taken him around the globe — a BSc in information systems from Boston University, an MSc in bioinformatics from the Free University of Berlin and a PhD in computer science from the Autonomous University of Madrid. In 2015, he moved to Dunedin "for professional and family reasons", taking a role with Otago Polytechnic, where he teaches AI, data science and advanced algorithms, and researches machine learning, computational social science and accessibility software for users with motor impairment.
The most famous Kiwi AI researcher we never knew about, Prof Rozado was pushed into the spotlight of global public consciousness a few months back when his research was quoted by The Economist in an article suggesting America was becoming less "woke". His work touches on a number of hot-button societal topics and their relationship to AI; issues he says we need to think about now if we don't want things to end badly.

Prof Rozado is no AI evangelist. Asked whether fear of AI is unfounded, the researcher says he doesn't think so. "In fact, we may not be worried enough."

The short history of AI is already littered with an embarrassment of unfortunate events. In 2021, for example, Dutch politicians, including the prime minister, resigned after an investigation found secretive AI supposed to sniff out tax cheats had falsely accused more than 20,000 families of social welfare fraud. In 2023, a BBC investigation found social media platform AI was deleting legitimate videos of possible war crimes, including footage of attacks in Ukraine, potentially robbing victims of access to justice. And last year, facial recognition technology trialled in 25 North Island supermarkets, but not trained on the New Zealand population, reduced crime but also resulted in a Māori woman being mistakenly identified as a thief and kicked out of a store.

If not a true believer, neither is Prof Rozado a prophet of doom; he is more a voice of expertise and experience urging extreme caution and deeply considered choices. His view of AI is neither rainbows and unicorns nor inevitable Armageddon; his preferred analogy is hazardous pathogens.

Given no-one can predict the future, Prof Rozado says it is helpful to think in terms of probability distributions — the likelihood of different possible outcomes. Take, for example, research to modify viruses to make them useful for human gene therapy, where, despite safety protocols, there is a small but not-insignificant risk a hazardous pathogen could escape the laboratory.

The same logic applies to AI, Prof Rozado says. "There are real risks — loss of human agency, massive unemployment, eroded purpose, declining leverage of human labour over capital, autonomous weapons, deceptive AI, a surveillance state or extreme inequality arising from an AI-driven productivity explosion with winner-take-all dynamics.

"I'm not saying any of this will happen, but there's a non-negligible chance one or more could."

Why he compares AI to a powerful, potentially dangerous virus becomes clear when he describes some of his research and explains the difficult issues it reveals AI is already creating.

Prof Rozado was quoted in The Economist because of his research into the prevalence of news media's use of terms about prejudice — for example, racism, sexism, Islamophobia, anti-Semitism, homophobia and transphobia — and terms about social justice, such as diversity, equity and inclusion. His study of 98 million news and opinion articles across 124 popular news media outlets from 36 countries showed the use of "progressive" or "woke" terminology increased in the first half of the 2010s and became a global phenomenon within a handful of years.

In the academic paper detailing the results, published last year, he said the way this phenomenon proliferated quickly and globally raised important questions about what was driving it. Speaking to The Weekend Mix, Prof Rozado says he thinks several factors might have contributed.
First among those, he cites the growing influence of social media — the ways the various platforms' guiding algorithms shape public discourse by both amplifying messages and helping create information silos. Other possible causes are the changing news media landscape, emerging political trends — or a combination of all three.

The Economist concluded, from its own and Prof Rozado's research, that the world had reached "peak woke" and that the trend might be reversing. "I'm a bit more cautious, as perhaps it's too early to say for sure," Prof Rozado says.

Whether you see either change as positive or dangerous, it raises the question of what role AI is playing in societal change.

Since then, Prof Rozado's attention has shifted towards the behaviour of AI in decision-making tasks. It has brought the same question into even sharper focus. Only a month after the previous study appeared, he published another paper, this time on the political biases baked into large language models (LLMs) — the type of AI that processes and generates human language.

Using tests designed to discern the political preferences of humans, Prof Rozado surveyed 24 state-of-the-art conversational LLMs and discovered most of them tended to give responses consistent with left-of-centre leanings (a sketch of how such a test can be scored appears at the end of this article). He then showed that with modest effort he could steer the LLMs towards different political biases. "It took me a few weeks to get the right mix of training data and less than $1000 ... to create politically aligned models that reflected different political perspectives."

Despite that, it is difficult to determine how LLMs' political leanings are actually being formed, he says. Creating an LLM involves first teaching it to predict what comes next, be it a word, a letter or a piece of punctuation. As part of that prediction training, the models are fed a wide variety of online documents. Then comes fine-tuning and reinforcement learning, using humans to teach the AI how to behave. The political preferences might be creeping in at any stage, either directly or by other means.

Unfortunately, the companies creating LLMs do not like to disclose exactly what material they feed their AI models or what methods they use to train them, Prof Rozado says. "[The biases] could also be [caused] ... by the model extrapolating from the training distribution in ways we don't fully understand."

Whatever the cause, the implications are substantial, Prof Rozado says. In the past year or so, internet users might have noticed that when searching online the top results are no longer the traditional list of links to websites but a collection of AI-curated information drawn from various online sources. "As mediators of what sort of information users consume, their societal influence is growing fast."

With LLMs beginning to displace the likes of search engines and Wikipedia, it brings the question of biases, political or otherwise, to the fore. It is a double-edged sword, Prof Rozado says. If we insist all AIs must share similar viewpoints, it could decrease the variety of viewpoints in society. This raises the spectre of a clampdown on freedom of expression. "Without free speech, societies risk allowing bad ideas, false beliefs and authoritarianism to go unchallenged. When dissent is penalised, flawed ideas take root."

But if we end up with a variety of AIs tailored to different ideologies, people will likely gravitate towards AI systems confirming their pre-existing beliefs, deepening the already growing polarisation within society.
"Sort of how consumers of news media self-sort to different outlets according to their viewpoint preferences or how social media algorithmically curated feeds create filter bubbles. "There's a real tension here — too much uniformity in AI perspectives could stifle debate and enforce conformity, but extreme customisation might deepen echo chambers." Finding the way ahead will not be easy, but doing nothing is potentially disastrous. And it is a path-finding challenge in which we all need to play a part, he says. "My work is just one contribution among many to the broader conversation about AI's impact on society. While it offers a specific lens on recent developments, I see it as part of a collective effort to better understand the technology. "Ultimately, it's up to all of us — researchers, policymakers, developers and the public — to engage thoughtfully with both the promises, the challenges and the risks AI presents." It is natural to assume Prof Rozado sees his primary contribution is helping humans think through how they manage the world-shaping power of AI. His real drive, in fact, is the reverse. AI systems develop their "understanding" of the world primarily through the written works of humans, Prof Rozado explains. Every piece of data they ingest during training slightly imprints their knowledge base. Future AI systems, he predicts, will ingest nearly all written content ever created. So by contributing research that critically examines the limitations and biases embedded in AI's memory parameters, he hopes he can help give AI a form of meta-awareness — an understanding of how its knowledge is constructed. "I hope some of my papers contribute to the understanding those systems will have about the origins of some of their own memory parameters. "If AI systems can internalise insights about the constraints of their own learning processes, this could help improve their reasoning and ultimately lead to systems that are better aligned with human values and more capable of responsible decision-making."