Latest news with #Monsanto


Time of India
3 days ago
- Science
- Time of India
AI will soon be able to audit all published research
Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic "paper mills" sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products. These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability.

In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghost-written by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This "science is broken" narrative undermines public trust.

AI is already helping police the literature

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation. Natural language processing tools flag "tortured phrases" - the tell-tale word salads of paper mills. Bibliometric dashboards such as one by Semantic Scholar trace whether papers are cited in support or contradiction.

AI - especially agentic, reasoning-capable models increasingly proficient in mathematics and logic - will soon uncover more subtle flaws. For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also substantially relies on large language models to process large volumes of text. Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record.

A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors. We do not know yet how prevalent fraud is, but what we do know is that an awful lot of scientific work is inconsequential. Scientists know this; it's much discussed that a good deal of published work is never or very rarely cited. To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments. What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable. As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Reframing the scientific ideal

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement. If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless. A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science. Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings - or better still, takes the lead - the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.

Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.


New Indian Express
3 days ago
- Science
- New Indian Express
AI will soon be able to audit all published research — what will that mean for public trust in science?
Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic 'paper mills' sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products. These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability.

In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This 'science is broken' narrative undermines public trust.


Mint
4 days ago
- Science
- Mint
AI will soon be able to audit all published research – what will that mean for public trust in science?
Wellington and Naomi Oreskes, Harvard University. Wellington/Cambridge, Jul 26 (The Conversation)

Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing. This has opened the doors for exploitation. Opportunistic 'paper mills' sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees. Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products. These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability.

In response, efforts have sprung up to bolster the integrity of the scientific enterprise. Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures. Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy. In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry. Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide. When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This 'science is broken' narrative undermines public trust.

AI is already helping police the literature

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation. Natural language processing tools flag 'tortured phrases' – the telltale word salads of paper mills. Bibliometric dashboards such as one by Semantic Scholar trace whether papers are cited in support or contradiction.

AI – especially agentic, reasoning-capable models increasingly proficient in mathematics and logic – will soon uncover more subtle flaws. For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also substantially relies on large language models to process large volumes of text. Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record.

A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors. We do not know yet how prevalent fraud is, but what we do know is that an awful lot of scientific work is inconsequential. Scientists know this; it's much discussed that a good deal of published work is never or very rarely cited. To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments. What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable. As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Reframing the scientific ideal

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement. If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless. A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science. Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings – or better still, takes the lead – the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.

Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken. (The Conversation)


Boston Globe
6 days ago
- Politics
- Boston Globe
EPA proposes allowing use of Dicamba weed killer on some crops
In a statement Wednesday, the EPA said, 'These new products would give farmers an additional tool to help manage crops and increase yields in order to provide a healthy and affordable food supply for our country.' Agriculture groups applauded the decision.

Dicamba became one of the most widely used herbicides on the market after agribusiness companies such as Monsanto released genetically engineered seeds that could tolerate it in 2016. The idea was that farmers could spray their fields with dicamba and weeds would wilt while the crops would survive.

Dicamba-tolerant seeds were developed in response to growing weed tolerance to another widely used herbicide, glyphosate, the active ingredient in Roundup. Starting in the 1990s, Monsanto marketed genetically engineered 'Roundup Ready' crop seeds alongside the popular herbicide Roundup. This line of corn, cotton and soy seeds was bred to resist glyphosate, and by 2011 more than 90% of soybeans grown in the U.S. were genetically engineered.

The EPA's decision drew an immediate rebuke from the Center for Biological Diversity, an environmental advocacy group that has sued over the use of dicamba. In a statement, Nathan Donley, the group's environmental health science director, said, 'This is what happens when pesticide oversight is controlled by industry lobbyists.'

Last month, Kyle Kunkler, a former soybean industry lobbyist who has been a vocal proponent of dicamba, joined the EPA's Office of Chemical Safety and Pollution Prevention as its deputy assistant administrator. This article originally appeared in The New York Times.


New York Times
6 days ago
- Politics
- New York Times
E.P.A. Proposes Allowing Use of Dicamba Weedkiller on Some Crops
The Environmental Protection Agency has proposed allowing the use of three products containing a controversial herbicide on genetically engineered cotton and soybeans. Last year, a federal court made certain uses of dicamba illegal after farmers complained that it had a tendency to drift into neighboring fields, damaging their crops. The ban was scheduled to take full effect this year.

The E.P.A., which reviewed dicamba's uses and found it poses no risk to human health, is now accepting public comment on its proposed decision. It will then decide whether to greenlight the products. In a statement on Wednesday, the E.P.A. said, 'These new products would give farmers an additional tool to help manage crops and increase yields in order to provide a healthy and affordable food supply for our country.' Agriculture groups applauded the decision.

Dicamba became one of the most widely used herbicides on the market after agribusiness companies like Monsanto released genetically engineered seeds that could tolerate it in 2016. The idea was that farmers could spray their fields with dicamba and weeds would wilt while the crops would survive. Dicamba-tolerant seeds were developed in response to growing weed tolerance to another widely used herbicide, glyphosate, the active ingredient in Roundup. Starting in the 1990s, Monsanto marketed genetically engineered 'Roundup Ready' crop seeds alongside the popular herbicide Roundup. This line of corn, cotton and soy seeds was bred to resist glyphosate, and by 2011 more than 90 percent of soybeans grown in the U.S. were genetically engineered.

The E.P.A.'s decision drew an immediate rebuke from the Center for Biological Diversity, an environmental advocacy group that has sued over the use of dicamba. In a statement, Nathan Donley, the group's environmental health science director, said, 'This is what happens when pesticide oversight is controlled by industry lobbyists.'

Last month, Kyle Kunkler, a former soybean industry lobbyist who has been a vocal proponent of dicamba, joined the E.P.A.'s Office of Chemical Safety and Pollution Prevention as its deputy assistant administrator.