
How Academics Are Pushing Back On The For-Profit Academic Publishing Industry
According to the independent news organization The Conversation, five publishing houses control about half of the global academic publishing industry's market share. RELX, the parent company of the 'biggest player in this business,' Elsevier, posted a profit margin of almost 40 percent in 2023, 'rivalling tech giants such as Microsoft and Google,' the March 2025 article pointed out.
'Many of the most trusted and prestigious research journals are owned by commercial publishers,' The Conversation noted. 'For example, the Lancet is owned by Elsevier.'
In 2024, the editorial board of the Journal of Human Evolution (JHE), a paleoanthropology journal published by Elsevier, collectively resigned. The board accused Elsevier of overcharging, as well as of deficient copyediting and unethical use of AI, which resulted in what the journal Science called 'scientifically significant errors.'
High article processing charges (APCs) are common in the for-profit academic publishing industry. The 2021 paper 'Equitable Open Access Publishing: Changing the Financial Power Dynamics in Academia' notes that high APCs 'exacerbate disparities between funded and unfunded researchers.'
'Traditional academic publishers exploit scholars in several ways,' says Denis Bourguet, co-founder of Peer Community In (PCI), a nonprofit platform that offers 'peer review, recommendation, and publication of scientific articles in open access (OA) for free,' according to its website. Bourguet says common practices within the traditional academic publishing model commodify scholarly knowledge, treating it not as a public good but as a resource to extract profit.
'Researchers produce articles, conduct peer reviews, and often serve as editors, typically without pay, while publishers profit by charging high fees to both authors and readers. With this model, authors must pay substantial article processing charges to publish in open access. Yet, in some journals, since some articles remain behind paywalls, universities and libraries must pay subscriptions to give their members free access to the full content of these journals,' adds Bourguet.
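To see why critics describe this arrangement as charging twice for the same work, consider the minimal sketch below in Python. Every figure in it (the APC, the subscription price, the per-article overhead) is a hypothetical placeholder chosen only to make the arithmetic concrete; none of these numbers comes from PCI, Elsevier, or this article.

```python
# Illustrative sketch only: every number below is a hypothetical placeholder,
# not a figure reported in this article.

def hybrid_journal_revenue(oa_articles, apc, subscribers, subscription_fee):
    """Revenue for a journal that charges authors APCs for open-access
    articles and also charges institutions subscriptions for the
    paywalled remainder of its content."""
    return oa_articles * apc + subscribers * subscription_fee

def diamond_oa_cost(articles, per_article_overhead):
    """A Diamond OA outlet charges neither authors nor readers; overheads
    (hosting, copyediting, administration) are covered by institutional
    subsidies instead."""
    return articles * per_article_overhead

if __name__ == "__main__":
    # Hypothetical: 200 OA articles at a $3,000 APC, plus 500 institutions
    # paying a $5,000 subscription for the paywalled content.
    print(hybrid_journal_revenue(200, 3_000, 500, 5_000))  # 3100000
    # The same 200 articles at a hypothetical $400 overhead each.
    print(diamond_oa_cost(200, 400))  # 80000
```

Under assumptions like these, the same unpaid scholarly labor is monetized twice: once through authors' institutions paying APCs, and again through readers' institutions paying subscriptions.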
PCI co-founder Thomas Guillemaud notes that costly paywalls make 'access difficult for researchers without institutional support, especially in low-income regions.' He adds that the 'pay-to-read or pay-to-publish model encourages researchers to focus on publishing in prestigious journals for career advancement, sometimes at the expense of research quality. This 'prestige economy' can distort scientific priorities and integrity. Pressures to publish in prestigious journals contribute to issues like irreproducible results, publication bias, and even scientific misconduct.'
According to a 2025 report in the Proceedings of the National Academy of Sciences, despite major advances such as antiretroviral therapy and the vaccines developed during the pandemic, science 'faces challenges due to the incentive systems,' with for-profit publishers trying to 'capitalize on unpaid reviewers and [charging] high fees for sharing and accessing knowledge.'
PCI is one of many academic-led initiatives challenging the dominance of for-profit publishers and, as Guillemaud puts it, 'reshaping scholarly communication.'
Lifecycle Journal, for instance, does not charge its authors or readers. It 'is a new transparent model of scholarly communication that aims to put publishing and evaluation in the control of the scholarly community itself,' its website states.
Similarly, SciPost, 'the home of genuine open publishing,' claims, 'We don't charge authors, we don't charge readers, we don't send bills to anybody for our services, and we certainly don't make any profit; we are an academic community service surviving on support from organizations that benefit from our activities. Said otherwise, our system is academia's antidote to APCs.'
The Free Journal Network curates and promotes Diamond OA journals that charge neither authors nor readers, ensuring adherence to fair open access principles and supporting a growing ecosystem of scholar-led publications.
The French nonprofit publishing platform Centre Mersenne 'endeavors to fight research output's privatization and outrageous profit-making out of the scientific commons,' according to its site. Its 'agenda is to support Diamond Open Access or Gold OA without APC (no fees required to read nor to publish).'
Diamond and Gold are two of many OA publishing models. Journals that use the Diamond Open Access model charge no fees to readers or authors; their funding comes from academic institutions, research funders, philanthropists, governments, advertisers, and nonprofit organizations. Gold OA journals likewise make articles freely available to readers, though publication costs are typically covered by APCs. The Medical College of Wisconsin describes a third approach, the Green OA model, as 'the practice of placing a version of an author's manuscript into a repository, making it freely accessible for everyone… No article processing charges are paid.' The Georgia State University Library also outlines various types of OA models.
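For readers keeping track of the distinctions, the sketch below restates these models as a small Python structure showing who, if anyone, pays. The mapping follows only the descriptions above and common usage; real journals vary.

```python
# Rough mapping of the OA models discussed above to who pays.
# Based only on the descriptions in this article; real journals vary.

OA_MODELS = {
    "Diamond": {
        "authors_pay": False,
        "readers_pay": False,
        "funded_by": ["institutions", "research funders", "philanthropists",
                      "governments", "advertisers", "nonprofits"],
    },
    "Gold": {
        "authors_pay": "usually, via APCs (some platforms waive them)",
        "readers_pay": False,
        "funded_by": ["APCs or institutional support"],
    },
    "Green": {
        "authors_pay": False,  # a manuscript version is self-archived
        "readers_pay": False,
        "funded_by": ["institutional repositories"],
    },
}

for model, terms in OA_MODELS.items():
    print(f"{model}: authors pay = {terms['authors_pay']}, "
          f"readers pay = {terms['readers_pay']}")
```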
Besides adopting OA models, academics are countering for-profit publishers by launching academic-led journals, pressuring them to lower their fees, renegotiating contracts, and forming consortia.
PCI embraces the Diamond OA model. Its support officer, Barbara Class, explains that its Peer Community Journal is free for authors and readers. This 'removes financial barriers imposed by article processing charges or subscription fees common in for-profit publishing. In addition, PCI publishes peer reviews and editorial decisions openly, promoting transparency and accountability in contrast to the often-opaque evaluation processes performed by for-profit journals.' Class adds, 'PCI focuses on the intrinsic value and quality of research rather than journal-based metrics.'
Guillemaud says PCI is sustained through a 'community-driven funding model based primarily on small, recurring public subsidies from universities, libraries, and research institutions. These institutions contribute annually on a pay-what-you-can basis… allowing broad participation regardless of size or budget. This stable and diversified funding base enables PCI to cover its operational costs without large private donors or charging fees to authors or readers.'
Author Bio: Damon Orion is a writer, journalist, musician, artist, and teacher in Santa Cruz, California. His work has appeared in Revolver, Guitar World, Spirituality + Health, Classic Rock, and other publications. Read more of his work at DamonOrion.com.