
'They're Hacking the AI to Approve Their Lies': Scientists Busted Embedding Hidden Prompts to Trick Systems Into Validating Fake Studies
and reveal hidden prompts in studies aiming to manipulate AI review systems. 🌐 Approximately 32 studies from 44 institutions worldwide were identified with these unethical practices, causing significant concern.
⚠️ The over-reliance on AI in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny.
in peer review raises ethical questions, as some reviewers may bypass traditional scrutiny. 🔗 Experts call for comprehensive guidelines on AI use to ensure research integrity and prevent manipulative practices.
The world of scientific research is facing a new, controversial challenge: the use of hidden prompts within scholarly studies intended to manipulate AI-driven review systems. This revelation has sparked significant debate within the academic community, as it sheds light on potential ethical breaches and the evolving role of technology in research validation. As scientists grapple with these issues, it is crucial to understand the implications of these practices on the trustworthiness of scientific findings and the integrity of academic publications. Hidden Messages in Studies: A Startling Discovery
Recent investigations by Nikkei Asia and Nature have uncovered instances of hidden messages within academic studies. These messages, often concealed in barely visible fonts or written in white text on white backgrounds, are not meant for human reviewers but target AI systems like Large Language Models (LLMs) to influence their evaluations. Such practices have raised alarms, as they attempt to secure only positive assessments for research submissions.
Approximately 32 studies have been identified with these manipulative prompts. These studies originated from 44 institutions across 11 countries, highlighting the global reach of this issue. The revelation has prompted the removal of these studies from preprint servers to maintain the integrity of the scientific process. The use of AI in peer review, intended to streamline the evaluation process, is now under scrutiny for its potential misuse and ethical implications.
'$100 Million Vanished and Nothing Flew': DARPA's Canceled Liberty Lifter Seaplane Leaves Behind a Trail of Broken Dreams and Game-Changing Tech The Broader Implications of AI in Peer Review
The discovery of hidden prompts in studies not only exposes unethical practices but also raises questions about the reliance on AI for peer review. While AI can assist in managing the growing volume of research, it appears that some reviewers may be over-relying on these systems, bypassing traditional scrutiny. Institutions like the Korea Advanced Institute of Science and Technology (KAIST) prohibit AI use in review processes, yet the practice persists in some quarters.
Critics argue that these hidden prompts are symptomatic of systemic problems within academic publishing, where the pressure to publish can outweigh ethical considerations. The use of AI should be carefully regulated to prevent such manipulations, ensuring that peer review remains a rigorous and trustworthy process. As the academic community grapples with these challenges, it becomes evident that adherence to ethical standards is crucial in maintaining the credibility of scientific research.
'They're Turning Pollution Into Candy!': Chinese Scientists Stun the World by Making Food from Captured Carbon Emissions The Ethical Imperative: Why Science Must Avoid Deception
Science is fundamentally built on trust and ethical integrity. From technological advancements to medical breakthroughs, the progress of society hinges on the reliability of scientific findings. However, the temptation to resort to unethical shortcuts, such as AI manipulation, poses a threat to this foundation. The scientific community must resist these temptations to preserve the credibility of their work.
The pressures facing researchers, including increased workloads and heightened scrutiny, may drive some to exploit AI. Yet, these pressures should not justify compromising ethical standards. As AI becomes more integrated into research, it is vital to establish clear regulations governing its use. This will ensure that science remains a bastion of truth and integrity, free from deceptive practices that could undermine public trust.
'They Cloned a Yak in the Himalayas!': Chinese Scientists Defy Nature with First-Ever Livestock Copy at 12,000 Feet Charting a Course Toward Responsible AI Use
The integration of AI into scientific processes demands careful consideration and responsible use. As highlighted by Hiroaki Sakuma, an AI expert, industries must develop comprehensive guidelines for AI application, particularly in research and peer review. Such guidelines will help navigate the ethical complexities of AI, ensuring it serves as a tool for advancement rather than manipulation.
While AI holds the potential to revolutionize research, its implementation must be guided by a commitment to ethical standards. The scientific community must engage in ongoing dialogue to address the challenges posed by AI, fostering a culture of transparency and accountability. Only through these measures can science continue to thrive as a pillar of progress, innovation, and truth.
As the intersection of AI and scientific research continues to evolve, how can the academic community ensure that technological advancements enhance rather than undermine the integrity of scientific inquiry?
This article is based on verified sources and supported by editorial technologies.
Did you like it? 4.5/5 (26)
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Euronews
9 hours ago
- Euronews
This cannibal robot can grow and heal by eating other robots
This robot is not the first transformer mechanism revealed to the public, but the way it transforms is certainly novel – it grows and heals by consuming other robots. Researchers from Columbia University in the United States have developed a robot, called the Truss Link, that can detect and merge with pieces of robots nearby to fill in missing parts. "True autonomy means robots must not only think for themselves but also physically sustain themselves," Philippe Martin Wyder, lead author and researcher at Columbia Engineering and the University of Washington, wrote in a statement. Made with magnetic sticks, the Truss Link can expand or transform from a flat shape to a 3D structure to adapt to the environment. It can also add new bits from other robots or discard old parts that are not functional anymore to increase its performance. In a video posted by the team, the robot merges with a piece nearby and uses it as a walking stick to increase its speed by more than 50 per cent. 'Gives legs to AI' Researchers named the process in which the robot self-assembles bits of other robots 'robot metabolism'. It is described as a natural biological organism that can often absorb and integrate resources. Robots like the Truss Link can 'provide a digital interface to the physical world, and give legs to AI,' according to a video produced by Columbia Engineering School. Integrated with AI, they possess great potential, experts believe. "Robot metabolism provides a digital interface to the physical world and allows AI to not only advance cognitively, but physically – creating an entirely new dimension of autonomy," said Wyder. The Truss Link could, in future, be used to help develop groundbreaking technologies spanning from marine research to rescue services to extraterrestrial life. "Ultimately, it opens up the potential for a world where AI can build physical structures or robots just as it, today, writes or rearranges the words in your email," Wyder said. Programming robots has been a challenge for engineers; however, artificial intelligence is advancing developments in robotics. 'We now have the technology [AI] to make robots really programmable in a general-purpose way and make it so that normal people can programme them, not just specific robot programming engineers," Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, told Euronews Next in May.


Euronews
12 hours ago
- Euronews
No woke AI: What to know about Trump's AI plan for global dominance
US President Donald Trump has said he will keep "woke AI" models out of US government, turn the country into an 'AI export powerhouse,' and weaken environmental regulation on the technology. The announcements come as he also signed three artificial intelligence-focused executive orders on Wednesday, which are a part of the country's so-called AI action plan. Here is what he announced and what it means. 1. No Woke AI One order, called 'Preventing Woke AI in the Federal Government,' bans "woke AI" models and AI that isn't 'ideologically neutral' from government contracts. It also says diversity, equity, and inclusion (DEI) is a 'pervasive and destructive' ideology that can 'distort the quality and accuracy of the output'. It refers to information about race, sex, transgenderism, unconscious bias, intersectionality, and systemic racism. It aims to protect free speech and "American values," but by removing information on topics such as DEI, climate change, and misinformation, it could wind up doing the opposite, as achieving objectivity is difficult in AI. David Sacks, a former PayPal executive and now Trump's top AI adviser, has been criticising 'woke AI' for more than a year, fueled by Google's February 2024 rollout of an AI image generator. When asked to show an American Founding Father, it created pictures of Black, Asian, and Native American men. Google quickly fixed its tool, but the 'Black George Washington' moment remained a parable for the problem of AI's perceived political bias, taken up by X owner Elon Musk, venture capitalist Marc Andreessen, US Vice President JD Vance, and Republican lawmakers. 2. Global dominance, cutting regulations The plan prioritises AI innovation and adoption, urging the removal of any barriers that could slow down adoption across industries and government. The nation's policy, Trump said, will be to do 'whatever it takes to lead the world in artificial intelligence". Yet it also seeks to guide the industry's growth to address a longtime rallying point for the tech industry's loudest Trump backers: countering the liberal bias they see in AI chatbots such as OpenAI's ChatGPT or Google's Gemini. 3. Streamlining AI data centre permits and less environmental regulation Chief among the plan's goals is to speed up permitting and loosen environmental regulation to accelerate construction on new data centres and factories. It condemns 'radical climate dogma' and recommends lifting environmental restrictions, including clean air and water laws. Trump has previously paired AI's need for huge amounts of electricity with his own push to tap into US energy sources, including gas, coal, and nuclear. 'We will be adding at least as much electric capacity as China,' Trump said at the Wednesday event. 'Every company will be given the right to build their own power plant'. Many tech giants are already well on their way toward building new data centres in the US and around the world. OpenAI announced this week that it has switched on the first phase of a massive data centre complex in Abilene, Texas, part of an Oracle-backed project known as Stargate that Trump promoted earlier this year. Amazon, Microsoft, Meta, and xAI also have major projects underway. The tech industry has pushed for easier permitting rules to get its computing facilities connected to power, but the AI building boom has also contributed to spiking demand for fossil fuel production, which contributes to global warming. United Nations Secretary-General Antonio Guterres on Tuesday called on the world's major tech firms to power data centres completely with renewables by 2030. The plan includes a strategy to disincentivise states from aggressively regulating AI technology, calling on federal agencies not to provide funding to states with burdensome regulations. 'We need one common sense federal standard that supersedes all states, supersedes everybody,' Trump said, 'so you don't end up in litigation with 43 states at one time'. Call for a People's AI Action Plan There are sharp debates on how to regulate AI, even among the influential venture capitalists who have been debating it on their favourite medium: the podcast. While some Trump backers, particularly Andreessen, have advocated an 'accelerationist' approach that aims to speed up AI advancement with minimal regulation, Sacks has described himself as taking a middle road of techno-realism. 'Technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will,' Sacks said on the 'All-In' podcast. On Tuesday, more than 100 groups, including labour unions, parent groups, environmental justice organisations, and privacy advocates, signed a resolution opposing Trump's embrace of industry-driven AI policy and calling for a 'People's AI Action Plan' that would 'deliver first and foremost for the American people.' Anthony Aguirre, executive director of the non-profit Future of Life Institute, told Euronews Next that Trump's plan acknowledges the "critical risks presented by increasingly powerful AI systems," citing bioweapons, cyberattacks, and the unpredictability of AI. But in a statement, he said the White House should go further to protect citizens and workers. "By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control," Aguirre said. "We know from experience that Big Tech promises alone are simply not enough".


France 24
a day ago
- France 24
Google-parent Alphabet earnings shine with help of AI
Alphabet's second-quarter profit of $28.2 billion -- on $96.4 billion in revenue -- came with word that the tech giant will invest more than its previously planned $85 billion on capital expenditure, as it spends heavily on AI infrastructure to meet growing demand for cloud services. "We had a standout quarter, with robust growth across the company," said Alphabet chief executive Sundar Pichai. "AI is positively impacting every part of the business, driving strong momentum." Revenue from search grew double-digits in the quarter, with features such as AI Overviews and the recently launched AI mode "performing well," according to Pichai. Ad revenue at YouTube continues to grow along with the video platform's subscription services, Alphabet reported. Alphabet's cloud computing business is on pace to bring in $50 billion over the course of the year, according to the company. "With this strong and growing demand for our cloud products and services, we are increasing our investment in capital expenditures in 2025 to approximately $85 billion and are excited by the opportunity ahead," Pichai said. Alphabet shares were essentially flat in after-market trades that followed the release of the earnings figures. Investors have been watching closely to see whether the tech giant may be pouring too much money into artificial intelligence and whether AI-generated summaries of search results will translate into fewer opportunities to serve up money-making ads. The internet giant is dabbling with ads in its new AI Mode for online search, a strategic move to fend off competition from ChatGPT while adapting its advertising business for an AI age. The integration of advertising has been a key question accompanying the rise of generative AI chatbots, which have largely avoided interrupting the user experience with marketing messages. However, advertising remains Google's financial bedrock. Google and rivals are spending billions of dollars on data centers and more for AI, while the rise of lower-cost model DeepSeek from China raises questions about how much needs to be spent. Antitrust battles Meanwhile the online ad business that generates the cash Google invests in its future could be neutered due to a defeat in a US antitrust case. During the summer of 2024, Google was found guilty of illegal practices to establish and maintain its monopoly in online search by a federal judge in Washington. The Justice Department is now demanding remedies that could transform the digital landscape: Google's divestiture from its Chrome browser and a ban on entering exclusivity agreements with smartphone manufacturers to install the search engine by default. District Judge Amit Mehta is considering "remedies" in a decision expected in the coming days or weeks. In another legal battle, a different US judge ruled this year that Google wielded monopoly power in the online ad technology market, another legal blow that could rattle the tech giant's revenue engine. District Court Judge Leonie Brinkema ruled that Google built an illegal monopoly over ad software and tools used by publishers. Combined, the courtroom defeats have the potential to leave Google split up and its influence curbed. Google said it is appealing both rulings. © 2025 AFP