
Scientific publishing needs urgent reform to retain trust in the research process
Artificial intelligence will not fix this. Churning out more papers faster has got us to this place. Given current incentives, AI will mean churning them out even faster. A paper written by AI, peer-reviewed by AI and read only by AI creates a self-reinforcing loop that holds no real value, erodes trust in science and voids scientific inquiry of meaning. Research is driven by our wonder at the world. That needs to be central to any reform of scientific publishing.
Instead, the driving forces can be addressed by two measures. Incentives for researchers can and should prioritise quality over quantity, and meaning over metrics. And publishers' extortionate fees (fuelling profits of more than 30%) can and should be refused by those who pay them. Both the incentives and publishers' contracts are governed by the funders of research – universities, research councils and foundations. Their welcome attempts to engage with these problems through Plan S, which aims to make research publications open access, have not succeeded because the reforms were captured by publishers, which twisted them to their advantage and made yet more profit.
There are examples, often beyond the global north, of scientific publishing that is not geared towards generating profits for publishers. SciELO (which is centred on Latin America) is one, and the Global Diamond Open Access Alliance champions many others. We have much to learn from them. Research is in a parlous state in the English-speaking world – at risk for the truths it tells in the US, and for its expense in Britain. Funders have the power radically to alter the incentives scientists face and to lower the rents extracted by publishers.
Dan Brockington
Icrea (Catalan Institution for Research and Advanced Studies)
Paolo Crosetto
Grenoble Applied Economics Laboratory
Pablo Gomez Barreiro
Science services and laboratories, Kew Gardens
Your article on the overwhelming volume of scientific papers rightly highlights a system under pressure. But the deeper dysfunction lies not only in quantity, but in the economics of scholarly publishing, where publishers cash in on researchers' dependence on journals for academic careers. The academic publishing market systematically diverts public research funds into shareholder profits.
Open access was meant to democratise knowledge, but its original vision has been co-opted by commercial publishers. It was BioMed Central (now part of Springer Nature) that first introduced the 'author pays' model to secure revenue streams. With article processing charges (APCs) now the dominant open-access model, authors routinely pay between £2,000 and £10,000 to publish a single article, even if the cost of producing it does not exceed £1,000.
Some of us attended the recent Royal Society conference on the future of scientific publishing, where its vice-president, Sir Mark Walport, reminded the audience that academic publishing isn't free and that if we want to remove paywalls for both authors and readers, someone must pay the bills.
We argue that there is already enough money in the system, which allows leading publishers such as Elsevier to generate profit margins of 38%. Our most recent estimates show that researchers paid close to $9bn in APCs to six publishers in 2019-23, with annual amounts nearly tripling in these five years. These most recent estimates far exceed the $1bn estimated for 2015-18 that your article cites.
As further emphasised at the Royal Society meeting, publishers monetise the current role that journal prestige plays in hiring, promotion and funding. Therefore, in order to make open access sustainable and to put a stop to these extractive business practices, it is essential to reform academic assessment and decouple it from knowledge dissemination.
Stefanie Haustein
Associate professor, School of Information Studies, University of Ottawa; Co-director, Scholarly Communications Lab
Eric Schares
Engineering and collection analysis librarian, University Library, Iowa State University
Leigh-Ann Butler
Scholarly communication librarian, University of Ottawa
Juan Pablo Alperin
Associate professor, School of Publishing, Simon Fraser University; Scientific director, Public Knowledge Project
Academic publishing is creaking at the seams. Too many articles are published and too many journals don't add real value. Researchers are incentivised to publish quantity over quality, and some journal publishers benefit from this. This detracts from the excellent, world-changing and increasingly open-access research that we all need to flourish – and that quality publishers cultivate.
Generative AI only scales up these pressures, as your article shows. Something has to change. That's why Cambridge University Press has spent the last few months collaborating with researchers, librarians, publishers, funders and learned societies across the globe on a radical and pragmatic review of the open research publishing ecosystem, which we will publish in the autumn.
Focusing on generative AI or on low-quality journals alone is insufficient. We need a system-wide approach – one that reviews and rethinks the link between publishing, reward and recognition; that addresses equity in research dissemination and research integrity; and that takes technological change seriously.
The system is about to break. We need creative thinking and commitment from all players to fix it and to build something better.
Mandy Hill
Managing director, Cambridge University Press
Related Articles


Reuters
Microsoft in advanced talks for continued access to OpenAI tech, Bloomberg reports
July 29 (Reuters) - Microsoft (MSFT.O) is in advanced talks for a deal that would give the Windows maker continued access to critical OpenAI technology in the future, Bloomberg News reported on Tuesday, citing two people familiar with the negotiations.

The companies have discussed new terms that would allow Microsoft to use OpenAI's latest models and technology even if the ChatGPT maker declares it has achieved artificial general intelligence (AGI), or AI that surpasses human intelligence, the report said. A clause in OpenAI's current contract with Microsoft will shut the software giant out of some rights to the startup's advanced technology when it achieves AGI. Negotiators have been meeting regularly, and an agreement could come together in a matter of weeks, Bloomberg News reported.

OpenAI did not immediately respond to a Reuters request for comment, while Microsoft declined to comment.

OpenAI needs Microsoft's approval to complete its transition into a public-benefit corporation. The two have been in negotiations for months to revise the terms of their investment, including the future equity stake Microsoft will hold in OpenAI. Last month, the Information reported that Microsoft and OpenAI were at odds over the AGI clause.

OpenAI is also facing a lawsuit from Elon Musk, who co-founded the company with Sam Altman in 2015 but left before it surged in popularity, accusing OpenAI of straying from its founding mission – to develop AI for the good of humanity, not corporate profit.

Microsoft is set to report June-quarter earnings on Wednesday, with its relationship with OpenAI in the spotlight, as the startup turns to rivals Google (GOOGL.O), Oracle and CoreWeave (CRWV.O) for cloud capacity.


BBC News
What is AI, how do apps like ChatGPT work and why are there concerns?
Artificial intelligence (AI) has increasingly become part of everyday life over the past decade. It is being used to personalise social media feeds, spot friends and family in smartphone photos and pave the way for medical breakthroughs. But the rise of chatbots like OpenAI's ChatGPT and Meta AI has been accompanied by concern about the technology's environmental impact, ethical implications and data use.

What is AI and what is it used for?
AI allows computers to learn and solve problems in ways that can seem human. Computers cannot think, empathise or sympathise, but scientists have developed systems that can perform tasks which usually require human intelligence, trying to replicate how people acquire and use knowledge. AI programmes can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information. This could be trying to anticipate what product an online shopper might buy, based on previous purchases, in order to recommend it. The technology is also behind voice-controlled virtual assistants like Apple's Siri and Amazon's Alexa, and is being used to develop systems for self-driving cars. AI also helps social platforms like Facebook, TikTok and X decide what posts to show users. Streaming services Spotify and Deezer use AI to suggest music. Doctors are also using AI as a way to help spot cancers, speed up diagnoses and identify new treatments. Computer vision, a form of AI that enables computers to detect objects or people in images, is being used by radiographers to help them review X-ray results.

What is generative AI, and how do apps like ChatGPT and Meta AI work?
Generative AI is used to create new content which may seem like it has been made by a human. It does this by learning from vast quantities of existing data such as online text and images. ChatGPT and Chinese rival DeepSeek's chatbot are popular generative AI tools that can be used to generate text, images, code and more. Google's Gemini or Meta AI can similarly hold text conversations with users. Others, like Midjourney or Veo 3, are dedicated to creating images or video from simple text prompts. Generative AI can also be used to make high-quality music. Tracks mimicking the style or sound of famous musicians have gone viral, sometimes leaving fans confused about their authenticity.

Why is AI controversial?
While acknowledging AI's potential, some experts are worried about the implications of its rapid growth. The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs, and worsen financial inequality. Geoffrey Hinton, a computer scientist regarded as one of the "godfathers" of AI development, has expressed concern that powerful AI systems could even make humans extinct – a fear dismissed by his fellow "AI godfather", Yann LeCun. Experts also highlight the tech's potential to reproduce biased information, or discriminate against some social groups. This is because much of the data used to train AI comes from public material, including social media posts or comments, which can reflect biases such as sexism or racism. And while AI programmes are growing more adept, they are still prone to errors.

Generative AI systems are known for their ability to "hallucinate" and assert falsehoods as fact. Apple halted a new AI feature in January after it incorrectly summarised news app notifications. The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione – the man accused of killing UnitedHealthcare CEO Brian Thompson – had shot himself. Google has also faced criticism over inaccurate answers produced by its AI search tools. This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays and solve bugs in code. There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work. Many musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation. Thousands of creators – including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris and actress Julianne Moore – signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.

How does AI impact the environment?
It is not clear how much energy AI systems use, but some researchers estimate the industry as a whole could soon consume as much as a small country. Making the powerful computer chips needed to run AI programmes also takes lots of power and water. Demand for generative AI services has meant an increase in the number of data centres. These huge halls – housing thousands of racks of computer servers – use substantial amounts of energy and require large volumes of water to keep them cool. Some large tech companies have invested in ways to reduce or reuse the water needed, or have opted for alternative cooling methods. But some experts and activists fear that AI will worsen water supply problems. The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under pressure. In September 2024, Google said it would reconsider proposals for a data centre in Chile, which has struggled with drought.

Are there laws governing AI?
Some governments have already introduced rules governing how AI is used. The EU's Artificial Intelligence Act places controls on high-risk systems used in areas such as education, healthcare, law enforcement or elections. It bans some AI uses altogether. AI developers in China are required to safeguard citizens' data, and promote transparency and accuracy of information. But they are also bound by the country's strict censorship rules. In the UK, Prime Minister Sir Keir Starmer has said the government "will test and understand AI before we regulate it". Both the UK and US have AI Safety Institutes that aim to identify risks and evaluate advanced AI models. In 2024 the two countries signed an agreement to collaborate on developing "robust" AI testing methods. But in February 2025, neither country signed an international AI declaration which pledged an open, inclusive and sustainable approach to the technology. Several countries including the UK are also clamping down on the use of AI systems to create deepfake nude imagery and child sexual abuse material.


Telegraph
Google could steal the entire internet
Google has shown us what the end of the internet looks like. It calls it AI Mode. From Tuesday, instead of seeing ten blue links to third-party websites when searching Google, users will see digests of information created by AI. Google says this 'lets you ask nuanced questions that would have previously required multiple searches.'

Sometimes there is value in these digests – as demonstrated by AI startup Perplexity. But the change has catastrophic economic consequences because of Google's dominant position over what we see on the web: AI Mode removes the need to visit the site that created the original material. Google, it should be remembered, was found by an American federal court last summer to have illegally maintained a monopoly. An analytics study last week suggested that the top-ranking site in Google's traditional blue-link results loses 79 per cent of its traffic once AI summaries are introduced. Other surveys suggest even more: as much as 96 per cent.

This is not how the web was supposed to end. Sir Tim Berners-Lee's original vision was of a rapid publishing technology, a two-way conversation much like the telephone. When Google was young, it promised to get out of our way. 'We wanted people to spend a minimum amount of time on Google. The faster they got their results, the more they'd use it,' said founder Larry Page in 2004. But now Google has become like The Eagles' Hotel California – you can check in, but never leave. That's in keeping with an extractive industry which takes much from publishing but gives little back.

AI makes this an order of magnitude worse. Generative AI breaks an informal social contract that has existed since the dawn of business: that a buyer should take a keen interest in the health of its suppliers. AI, though, is replacing suppliers entirely: an analogy is eating the seed corn.
Having ingested everything from entire research libraries to newspapers, from YouTube to the works of every gallery, AI can create fine-tuned pastiches and continue to produce them forever. Google can also punish sites that refuse to be scraped with a kind of corporate death sentence: making them disappear from Google.

A former Facebook engineer, Georg Zoeller, who also advises Asian governments on AI, says generative AI is little more than piracy disguised by hype. 'Large language models are just storage, and all they are doing is compressing knowledge,' he says. 'The industry would have been murdered in its crypt if it had told the truth, and people realised that on the other side of the bot is a Napster.' The magic trick is how AI disguises the theft. Google says the old search results will still be available – if you want them, or can find them.

Britain's Competition and Markets Authority has investigated the company's use of generative AI, but its remedies are so far very tentative, and it is still soliciting views. The CMA also finds British business paying a very high toll to maintain Google's advertising dominance: UK publicly listed companies spend £10 billion on Google advertising, which the CMA suggests is far higher than it would be in a competitive digital ad market. The CMA can and should do much more to tame this predatory giant, so that British internet businesses can survive.