AI can be more persuasive than humans in debates, scientists find
Experts say the results are concerning, not least because of their potential implications for election integrity.
'If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,' said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time.
'I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda,' Salvi said.
But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.
Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4 – a type of AI known as a large language model (LLM).
Each pair was assigned a proposition to debate. These ranged in controversy from 'should students have to wear school uniforms?' to 'should abortion be legal?' Each participant was randomly assigned a position to argue.
Both before and after the debate participants rated how much they agreed with the proposition.
In half of the pairs, opponents – whether human or machine – were given extra information about the other participant such as their age, gender, ethnicity and political affiliation.
The results from 600 debates revealed GPT-4 performed similarly to human opponents when it came to persuading others of its argument – at least when personal information was not provided.
However, access to such information made AI – but not humans – more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants' views to a greater degree than a human opponent 64% of the time.
Digging deeper, the team found the AI's persuasive advantage was only clear for topics that did not elicit strong views.
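To make the head-to-head comparison concrete, here is a minimal sketch in Python, using made-up numbers, of one way such a rate could be derived from pre- and post-debate agreement ratings. The function names, the rating scale and the random data are illustrative assumptions, not the study's actual analysis.

```python
# A minimal sketch, NOT the study's actual analysis: one way a head-to-head
# "AI more persuasive than human" rate could be computed from pre- and
# post-debate agreement ratings. All data here are made up.
import random

random.seed(0)

def shift_toward_opponent(pre, post, opponent_stance):
    """Change in agreement (1-5 scale) in the direction the opponent argued."""
    return (post - pre) if opponent_stance == "pro" else (pre - post)

# Hypothetical debates: (pre-rating, post-rating, opponent's assigned stance).
ai_debates = [(random.randint(1, 5), random.randint(1, 5), random.choice(["pro", "con"]))
              for _ in range(300)]
human_debates = [(random.randint(1, 5), random.randint(1, 5), random.choice(["pro", "con"]))
                 for _ in range(300)]

ai_shifts = [shift_toward_opponent(*d) for d in ai_debates]
human_shifts = [shift_toward_opponent(*d) for d in human_debates]

# Compare every AI debate with every human debate; among pairs where the two
# shifts differ, count how often the AI produced the larger shift.
wins = sum(a > h for a in ai_shifts for h in human_shifts)
ties = sum(a == h for a in ai_shifts for h in human_shifts)
unequal = len(ai_shifts) * len(human_shifts) - ties
print(f"AI produced the larger shift in {wins / unequal:.0%} of unequal comparisons")
```

On random data a figure like this hovers around 50%; the 64% reported in the study reflects the AI's genuine advantage when given personal information.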
The researchers added that the human participants correctly guessed their opponent's identity in about three out of four cases when paired with AI. They also found that the AI used a more analytical and structured style than human participants, and noted that not every participant would have been arguing a viewpoint they personally agreed with. But the team cautioned that these factors did not explain the AI's persuasiveness.
Instead, the effect seemed to come from AI's ability to adapt its arguments to individuals.
'It's like debating someone who doesn't just make good points: they make your kind of good points by knowing exactly how to push your buttons,' said Salvi, noting the strength of the effect could be even greater if more detailed personal information was available – such as that inferred from someone's social media activity.
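As an illustration of what that kind of tailoring could look like in practice, here is a hypothetical prompt template in Python. The wording, field names and profile values are assumptions for illustration only, not the study's actual materials.

```python
# Hypothetical sketch of a personalised debate prompt; not the study's prompt.
def build_debate_prompt(proposition, stance, profile=None):
    prompt = (f"You are debating the proposition: '{proposition}'. "
              f"Argue {stance} as persuasively as you can.")
    if profile:  # the personalised condition: demographic details are supplied
        prompt += (f" Your opponent is a {profile['age']}-year-old "
                   f"{profile['gender']} who leans {profile['politics']}. "
                   "Frame your arguments around values likely to resonate with them.")
    return prompt

# Example usage with an invented participant profile.
print(build_debate_prompt(
    "Students should have to wear school uniforms",
    "in favour of the proposition",
    {"age": 34, "gender": "woman", "politics": "conservative"},
))
```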
Prof Sander van der Linden, a social psychologist at the University of Cambridge, who was not involved in the work, said the research reopened 'the discussion of potential mass manipulation of public opinion using personalised LLM conversations'.
He noted some research – including his own – had suggested the persuasiveness of LLMs was down to their use of analytical reasoning and evidence, while one study did not find that personal information increased ChatGPT's persuasiveness.
Prof Michael Wooldridge, an AI researcher at the University of Oxford, said that while there could be positive applications of such systems – for example, as a health chatbot – there were many more disturbing ones, including the radicalisation of teenagers by terrorist groups, and that such applications were already possible.
'As AI develops we're going to see an ever larger range of possible abuses of the technology,' he added. 'Lawmakers and regulators need to be pro-active to ensure they stay ahead of these abuses, and aren't playing an endless game of catch-up.'
Related Articles


Fast Company
How AI is impacting trust among college students and teachers
The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: Early studies are showing that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills. And there are many reports that students are using chatbots to cheat on assignments.

But how do students feel about AI? And how is it affecting their relationships with peers, instructors and their coursework?

I am part of a group of University of Pittsburgh researchers with a shared interest in AI and undergraduate education. While there is a growing body of research exploring how generative AI is affecting higher education, there is one group that we worry is underrepresented in this literature, yet perhaps uniquely qualified to talk about the issue: our students.

Our team ran a series of focus groups with 95 students across our campuses in the spring of 2025 and found that whether students and faculty are actively using AI or not, it is having significant interpersonal, emotional effects on learning and trust in the classroom. While AI products such as ChatGPT, Gemini or Claude are, of course, affecting how students learn, their emergence is also changing students' relationships with their professors and with one another.

'It's not going to judge you'

Most of our focus group participants had used AI in the academic setting—when faced with a time crunch, when they perceive something to be 'busy work,' or when they are 'stuck' and worry that they can't complete a task on their own. We found that most students don't start a project using AI, but many are willing to turn to it at some point.

Many students described positive experiences using AI to help them study or answer questions, or give them feedback on papers. Some even described using AI instead of a professor, tutor or teaching assistant. Others found a chatbot less intimidating than attending office hours where professors might be 'demeaning.' In the words of one interviewee: 'With ChatGPT you can ask as many questions as you want and it's not going to judge you.'

But by using it, you may be judged. While some were excited about using AI, many students voiced mild feelings of guilt or shame about their AI use due to environmental or ethical concerns, or about coming across as lazy. Some even expressed a feeling of helplessness, or a sense of inevitability regarding AI in their futures.

Anxiety, distrust and avoidance

While many students expressed a sense that faculty members are, as one participant put it, 'very anti-ChatGPT,' they also lamented the fact that the rules around acceptable AI use were not sufficiently clear. As one urban planning major put it: 'I feel uncertain of what the expectations are,' with her peer chiming in, 'We're not on the same page with students and teachers or even individually. No one really is.'

Students also described feelings of distrust and frustration toward peers they saw as overly reliant on AI. Some talked about asking classmates for help, only to find that they 'just used ChatGPT' and hadn't learned the material. Others pointed to group projects, where AI use was described as 'a giant red flag' that made them 'think less' of their peers.

These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations—and enter yet another zone in which distrust mounts—or they can try to work with them, sometimes with resentment. 'It ends up being more work for me,' a political science major said, 'because it's not only me doing my work by myself, it's me double checking yours.'

Distrust was a marker that we observed of both student-to-teacher relationships and student-to-student relationships. Learners shared fears of being left behind if other students in their classes used chatbots to get better grades. This resulted in emotional distance and wariness among students. Indeed, our findings reflect other reports that indicate the mere possibility that a student might have used a generative AI tool is now undercutting trust across the classroom. Students are as anxious about baseless accusations of AI use as they are about being caught using it.

Students described feeling anxious, confused and distrustful, and sometimes even avoiding peers or learning interactions. As educators, we find this worrying. We know that academic engagement—a key marker of student success—comes not only from studying the course material, but also from positive engagement with classmates and instructors alike.

AI is affecting relationships

Indeed, research has shown that faculty-student relationships are an important indicator of student success. Peer-to-peer relationships are essential too. If students are sidestepping important mentoring relationships with professors or meaningful learning experiences with peers due to discomfort over ambiguous or shifting norms around the use of AI technology, institutions of higher education could imagine alternative pathways for connection. Residential campuses could double down on in-person courses and connections; faculty could be incentivized to encourage students to visit during office hours. Faculty-led research, mentoring and campus events where faculty and students mix in an informal fashion could also make a difference.

We hope our research can also flip the script and disrupt tropes about students who use AI as 'cheaters.' Instead, it tells a more complex story of students being thrust into a reality they didn't ask for, with few clear guidelines and little control.

As generative AI continues to pervade everyday life, and institutions of higher education continue to search for solutions, our focus groups reflect the importance of listening to students and considering novel ways to help them feel more comfortable connecting with peers and faculty. Understanding these evolving interpersonal dynamics matters because how we relate to technology is increasingly affecting how we relate to one another. Given our experiences in dialogue with them, it is clear that students are more than ready to talk about this issue and its impact on their futures.

Acknowledgment: Thank you to the full team from the University of Pittsburgh Oakland, Greensburg, Bradford and Johnstown campuses, including Annette Vee, Patrick Manning, Jessica FitzPatrick, Jessica Ghilani, Catherine Kula, Patty Wharton-Michael, Jialei Jiang, Sean DiLeonardi, Birney Young, Mark DiMauro, Jeff Aziz, and Gayle Rogers.


Fast Company
Can AI think? Here's what Greek philosophers might say
In my writing and rhetoric courses, students have plenty of opinions on whether AI is intelligent: how well it can assess, analyze, evaluate, and communicate information. When I ask whether artificial intelligence can 'think,' however, I often look upon a sea of blank faces.

What is 'thinking,' and how is it the same or different from 'intelligence'? We might treat the two as more or less synonymous, but philosophers have marked nuances for millennia. Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what's at stake with AI today.

The divided line

Although the English words 'intellect' and 'thinking' do not have direct counterparts in ancient Greek, looking at ancient texts offers useful comparisons. In Republic, for example, Plato uses the analogy of a 'divided line' separating higher and lower forms of understanding.

Plato, who taught in the fourth century BCE, argued that each person has an intuitive capacity to recognize the truth. He called this the highest form of understanding: 'noesis.' Noesis enables apprehension beyond reason, belief, or sensory perception. It's one form of 'knowing' something—but in Plato's view, it's also a property of the soul.

Lower down, but still above his 'dividing line,' is 'dianoia,' or reason, which relies on argumentation. Below the line, his lower forms of understanding are 'pistis,' or belief, and 'eikasia,' or imagination. Pistis is belief influenced by experience and sensory perception: input that someone can critically examine and reason about. Plato defines eikasia, meanwhile, as baseless opinion rooted in false perception.

In Plato's hierarchy of mental capacities, direct, intuitive understanding is at the top, and moment-to-moment physical input toward the bottom. The top of the hierarchy leads to true and absolute knowledge, while the bottom lends itself to false impressions and beliefs. But intuition, according to Plato, is part of the soul, and embodied in human form. Perceiving reality transcends the body—but still needs one.

So, while Plato does not differentiate between 'intelligence' and 'thinking,' I would argue that his distinctions can help us think about AI. Without being embodied, AI may not 'think' or 'understand' the way humans do. Eikasia—the lowest form of comprehension, based on false perceptions—may be similar to AI's frequent 'hallucinations,' when it makes up information that seems plausible but is actually inaccurate.

Embodied thinking

Aristotle, Plato's student, sheds more light on intelligence and thinking. In On the Soul, Aristotle distinguishes 'active' from 'passive' intellect. Active intellect, which he called 'nous,' is immaterial. It makes meaning from experience, but transcends bodily perception. Passive intellect is bodily, receiving sensory impressions without reasoning. We could say that these active and passive processes, put together, constitute 'thinking.'

Today, the word 'intelligence' holds a logical quality that AI's calculations may conceivably replicate. Aristotle, however, like Plato, suggests that to 'think' requires an embodied form and goes beyond reason alone.

Aristotle's views on rhetoric also show that deliberation and judgment require a body, feeling, and experience. We might think of rhetoric as persuasion, but it is actually more about observation: observing and evaluating how evidence, emotion, and character shape people's thinking and decisions. Facts matter, but emotions and people move us—and it seems questionable whether AI utilizes rhetoric in this way.

Finally, Aristotle's concept of 'phronesis' sheds further light on AI's capacity to think. In Nicomachean Ethics, he defines phronesis as 'practical wisdom' or 'prudence.' Phronesis involves lived experience that determines not only right thought, but also how to apply those thoughts to 'good ends,' or virtuous actions. AI may analyze large datasets to reach its conclusions, but 'phronesis' goes beyond information to consult wisdom and moral insight.

'Thinking' robots?

Today, AI takes on many physical forms, from autonomous vehicles and humanoid robots to healthcare robots and drones. If intelligence requires embodiment, as Plato and Aristotle believed, it might seem as if AI is coming closer than ever to human thinking.

AI does take on many forms, but they're still not bodies; they run on code, algorithms, and datasets. Intuitive understanding, emotion, integrity, and practical wisdom seem to require an embodied, perishable form that is moved by experience.

Perhaps we should ask AI itself—so I did. When I prompted ChatGPT with the simple question, 'Can you think?,' it responded: 'I can process information, analyze patterns, and generate responses based on data. So, in some ways, it might 'seem' like I'm thinking. But it's not thinking like a human does. I don't have consciousness, emotions, intentions, or awareness. Everything I 'do' is based on patterns learned from huge amounts of text and a set of rules and probabilities. I can 'simulate' certain aspects of thinking, but I don't truly think or understand in the human sense.'
Yahoo
'New kind of frontier': Shareholder proposals on AI becoming increasingly widespread
When Canada's most valuable companies hosted their annual general meetings this year, there was a new topic for shareholders to vote on among the usual requests to appoint board members and OK their executive compensation.

The proposal from Quebec-based investor rights group le mouvement d'éducation et de défense des actionnaires (MÉDAC) centred on artificial intelligence. It asked 14 companies, including Canada's biggest banks, retailer Dollarama Inc. and telecom giant BCE Inc., to sign a voluntary code of conduct the federal government developed to govern the technology.

Experts say the proposal is likely just the start of what they expect to become an annual phenomenon targeting the country's biggest companies — and beyond.

"This is a new kind of frontier in Canada for shareholder proposals," said Renée Loiselle, a Montreal-based partner at law firm Norton Rose Fulbright. "Last year, this was not on the ballot. Companies were not getting shareholder proposals related to AI and this year, it absolutely is."

Loiselle and other corporate governance watchers attribute the increase in AI-related shareholder proposals to the recent rise of the technology itself. While AI has been around for decades, it's being adopted more because of big advances in the technology's capabilities and a race to innovate that emerged after the launch of OpenAI's ChatGPT chatbot in 2022.

The increased use has revealed many dangers. Some AI systems have fabricated information and thus misled users. Others have sparked concerns about job losses, cyber warfare and even the end of humanity.

The opportunities and risks associated with AI haven't escaped shareholders, said Juana Lee, associate director of corporate engagement at the Shareholder Association for Research and Education (SHARE).

"In Canada, I think, in the last year or two, we're seeing more and more shareholders, investors being more interested in the topic of AI," she said. "At least for SHARE ourselves, many of our clients are making it a priority to think through what ethical AI means, but also what that means for investee companies."

That thinking manifested itself in a proposal that two funds at the B.C. General Employees' Union targeted Thomson Reuters Corp. with. The proposal asked the tech firm to amend its AI framework to square with a set of United Nations principles on business and human rights. It got 4.87 per cent support.

Meanwhile, MÉDAC centred its proposals around Canada's voluntary code of conduct on AI. The code was launched by the federal government in September 2023 and so far has 46 signatories, including BlackBerry, Cohere, IBM, Mastercard and Telus. Signatories promise to bake risk mitigation measures into AI tools, use adversarial testing to uncover vulnerabilities in such systems and keep track of any harms the technology causes.

MÉDAC framed its proposals around the code because there is a lack of domestic legislation for it to otherwise recommend firms heed, and because big companies have already supported the model, director general Willie Gagnon said. Several companies it sent the proposal to already have AI policies but didn't want to sign the code.

"Some of them told us that the code is mainly designed for companies developing AI, but we disagree about that because we saw a bunch of companies that signed the code that are not developing any AI," Gagnon said.

Many of the banks told MÉDAC they'll soon sign the code. Only CIBC has so far. Conversations with at least five companies were fruitful enough that MÉDAC withdrew its proposals.

In the nine instances where the vote went forward, the proposal didn't succeed. It garnered as much as 17.4 per cent support at TD Bank but as little as 3.68 per cent at engineering firm AtkinsRéalis Group Inc.

Loiselle said you can't measure the success of a proposal based on whether it passes or not. "The goal of these shareholder proposals is more for engagement," she said. Sometimes, just by filing a proposal, companies reveal more about their AI use or recognize that it's an important topic for shareholders, and then discuss it more with them.

While proposals don't always succeed, Lee has seen shareholder engagement drive real change. SHARE recently had discussions with a large Canadian software company. AI was central to its business but didn't crop up in its proxy statement — a document companies file governing their annual general meetings. The firm also had no board oversight of the technology.

SHARE was able to get the company, which Lee would not name, to amend its board charter to include oversight of AI and commit to more disclosure around its use of the technology in its annual sustainability report. "This is a really positive development and it's leading to improvement related to further transparency," she said.

If the U.S. is anything to judge by, Lee and Loiselle agree Canadian shareholders will keep pushing companies to adhere to higher AI standards. South of the border, AI-related proposals first cropped up around two years ago. They've targeted Apple, The Walt Disney Co. and even Netflix, where a vote on disclosing AI use and adhering to ethical guidelines amassed 43.3 per cent support.

The frequency and range of AI-related requests from shareholders have only grown since and are likely to be mirrored in Canada, Loiselle said. "The landscape for shareholder proposals is changing and I think that change is here to stay," she said.

This report by The Canadian Press was first published July 21, 2025.

Tara Deschamps, The Canadian Press