
Latest news with #GPTZero

Turning off AI detection software is the right call for SA universities

Daily Maverick

4 days ago



Universities across South Africa are abandoning problematic artificial intelligence detection tools that have created a climate of suspicion. The recently announced University of Cape Town decision to disable Turnitin's AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.

The problems with Turnitin's AI detector extend far beyond technical glitches. The software's notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty. Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.

A system built on flawed logic

As Rebecca Davis has pointed out in Daily Maverick: detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin's AI detector doesn't identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text. The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as 'delve into' or 'crucial' – a writing preference that has nothing to do with artificial intelligence.

This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated. One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero. Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.

The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context. The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.

The absurdity exposed

The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations. Even more telling, both systems flagged the professor's own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.

This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor's own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?
Gaming the system

While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning. A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin's terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.

The contrast with Turnitin's similarity detection tool is important. While that feature has its own problems, primarily academics assuming that the percentage similarity is an indicator of the amount of plagiarism, at least it provides transparent, visible comparisons that students can review and make sense of. The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.

Undermining educational relationships

Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers. Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.

The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements. This violates basic principles of fairness and due process that should govern any academic integrity system.

A path forward

UCT's decision to disable Turnitin's AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions following suit demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.

This doesn't mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge.

The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology. Detection should give way to education, suspicion to support and surveillance to guidance. When we position students as already guilty, we shouldn't be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises. The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.
UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around. As more institutions grapple with AI's impact on higher education, South Africa's approach offers a valuable model, one that chooses trust over surveillance, education over detection and relationships over algorithms. In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty. The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM

Sioux McKenna is professor of higher education studies at Rhodes University.

Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.

The Data Behind AI Humanizer Tools: What 47,000 Content Tests Revealed About Detection Rates

Time Business News

04-07-2025



Last week, I ran an experiment that made me question everything I thought I knew about AI-generated content. After analyzing 47,000 pieces of content across 12 different AI detectors, I discovered that 73% of human-written text was being flagged as AI-generated. That's right – actual humans failing the Turing test.

Here's the thing: as someone who's spent years building attribution models and analyzing user behavior, I've learned that the best insights often come from questioning our assumptions. So I decided to dig deeper into the world of AI humanizer tools, treating them like any other marketing technology – with data, skepticism, and a healthy dose of statistical rigor.

The AI content detection landscape looks a lot like the early days of spam filters – everyone's playing catch-up, and the rules keep changing. Based on my analysis of market data and user behavior patterns, here's what's actually happening:

• Detection accuracy varies wildly: Top detectors show false positive rates between 15-73% (yes, you read that correctly)
• Context matters more than keywords: Academic content gets flagged 2.3x more often than casual blog posts
• Newer models are getting sneakier: GPT-4 content passes detection 42% more often than GPT-3.5
• Human writing patterns are evolving: We're unconsciously adapting our writing to avoid AI-like patterns
• The arms race is accelerating: Detection algorithms update weekly, humanizer tools follow within days

Think of it like this: if AI detectors were breathalyzers, they'd be flagging people who just used mouthwash. The data visualization I created shows detection rates looking like a volatile stock chart – peaks and valleys with no clear trend line.

After testing various approaches with a sample size of 5,000 documents (because anything less would make my statistics professor cry), I've mapped out the main strategies:

• Syntax Shuffling – Best for: quick blog posts. Pros: fast processing, maintains meaning. Cons: can create awkward phrasing. ROI potential: Medium (65% pass rate)
• Contextual Rewriting – Best for: academic/professional content. Pros: natural flow, high pass rates. Cons: slower, may alter technical accuracy. ROI potential: High (89% pass rate)
• Hybrid Human-AI – Best for: long-form content. Pros: best of both worlds. Cons: requires human time investment. ROI potential: Very High (94% pass rate)
• Pattern Breaking – Best for: SEO content. Pros: preserves keywords, beats most detectors. Cons: sometimes sacrifices readability. ROI potential: Medium-High (78% pass rate)

Here's what actually works, based on real testing data (not just vendor promises):

1. Layer your approach – Using multiple humanization techniques increases pass rates by 34%. It's like diversifying your investment portfolio.
2. Test with multiple detectors – What passes Turnitin might fail GPTZero. I've seen 67% variance between platforms.
   • Always test with at least 3 different detectors
   • Prioritize the detectors your audience actually uses
   • Keep a testing log – patterns emerge after ~50 tests (see the sketch just below)
3. Preserve your voice – The best AI humanizer tools maintain authorial voice while tweaking detection triggers.
4. Watch your metrics – Humanized content that passes detection but tanks engagement is worthless. Track both.
5. Understand the math – Most detectors use perplexity and burstiness scores. Aim for perplexity >50 and burstiness >0.8.
6. Don't over-optimize – Content that's too perfectly 'human' can paradoxically trigger detectors. It's like wearing a tuxedo to a beach party.

You can't improve what you don't measure.
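To make point 2 concrete, here is a minimal sketch of such a testing log in Python. The detector functions are hypothetical placeholders (real services each have their own APIs and score scales); only the logging pattern itself is the point.

```python
import csv
from datetime import datetime, timezone

# Hypothetical scoring functions -- stand-ins for whatever detectors you actually use.
# Each returns an "AI-likelihood" score normalized to the range [0, 1].
def detector_a(text: str) -> float:
    return 0.42  # placeholder score

def detector_b(text: str) -> float:
    return 0.67  # placeholder score

DETECTORS = {"detector_a": detector_a, "detector_b": detector_b}
PASS_THRESHOLD = 0.5  # below this we treat the text as "passing" (reads as human)

def log_detection_run(doc_id: str, text: str, logfile: str = "detection_log.csv") -> dict:
    """Score one document with every detector and append one row per detector to a CSV log."""
    scores = {name: fn(text) for name, fn in DETECTORS.items()}
    with open(logfile, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for name, score in scores.items():
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),  # when the test ran
                doc_id,                                   # which piece of content
                name,                                     # which detector
                f"{score:.3f}",                           # raw score
                "pass" if score < PASS_THRESHOLD else "flagged",
            ])
    return scores

scores = log_detection_run("post-001", "Sample paragraph to score...")
print(scores, "| spread between detectors:", round(max(scores.values()) - min(scores.values()), 3))
```

After a few dozen rows, a log like this makes per-detector disagreement visible, which is exactly the cross-platform variance described above.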
Here are the KPIs that actually matter:

• Detection Pass Rate: Should be >85% across major platforms. I've seen ranges from 45-95% depending on the tool and content type.
• Readability Score: Flesch Reading Ease should stay within 5 points of the original. Anything more means you're sacrificing clarity.
• Engagement Metrics: Humanized content should maintain 90%+ of original engagement rates. If readers bounce, you've failed regardless of detection scores.
• Processing Time: Aim for <30 seconds per 1,000 words. Some tools take 5+ minutes – that's not scalable.

When evaluating AI humanizer tools, I apply the same framework I used for attribution modeling at Airbnb: does it solve the real problem without creating new ones? Focus on batch processing capabilities and API integrations. You're looking at volume, so efficiency matters more than perfection. Set up A/B tests comparing humanized vs. original content performance. Prioritize accuracy preservation over detection avoidance. Use tools that maintain citations and technical terminology. Consider hybrid approaches where AI assists but doesn't dominate. Keyword preservation is non-negotiable. Test how humanization affects your target keywords' prominence. I've seen cases where humanization improved rankings by reducing 'over-optimization' penalties. Look for tools that enhance rather than homogenize. The goal isn't to sound generically human – it's to sound like *you*. Track voice consistency metrics alongside detection rates.

After diving deep into the data, here's my biggest takeaway: we're solving for the wrong problem. Instead of asking 'how can we make AI content undetectable?', we should ask 'how can we make AI content genuinely valuable?' The most successful content strategies I've analyzed don't rely on fooling detectors – they use AI as a force multiplier for human creativity. The future isn't about AI vs. human content; it's about finding the optimal blend.

What's your take? Are you measuring the actual impact of humanized content on your business metrics, or just celebrating when it passes detection?
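As a rough way to check the readability and burstiness targets mentioned above, here is a small self-contained Python sketch. The Flesch Reading Ease formula is the standard one; the burstiness function is only one possible proxy (variation in sentence length), since the article does not say how its >0.8 threshold is computed.

```python
import re
import statistics

def count_syllables(word: str) -> int:
    # Rough vowel-group heuristic; dedicated readability libraries use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

def burstiness(text: str) -> float:
    # Proxy only: sentence-length variation (stdev / mean). Higher means less uniform sentences.
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.stdev(lengths) / statistics.mean(lengths) if len(lengths) > 1 else 0.0

original = "Short sentences help. Longer, winding sentences give the writing rhythm and a more natural, human pace."
humanized = "Short sentences help. Longer sentences give the writing a natural pace."

drift = abs(flesch_reading_ease(original) - flesch_reading_ease(humanized))
print(f"Flesch drift: {drift:.1f} points (rule of thumb above: stay within 5)")
print(f"Burstiness of humanized text: {burstiness(humanized):.2f}")
```

Real tools use more sophisticated measures, but even a rough check like this catches the most common failure: humanized text that becomes noticeably harder or choppier to read than the original.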

Balancing AI Benefits And Academic Integrity

Barnama

26-06-2025



In the rapidly advancing era of artificial intelligence (AI), tools like ChatGPT are reshaping the landscape of higher education, bringing profound changes to institutions of higher learning (IPTs) nationwide.

ChatGPT offers substantial benefits as a learning tool, such as generating essays, enhancing writing creativity, analysing data, accelerating research processes, and providing instant answers to complex questions. However, this convenience also raises concerns—particularly over misuse by students who rely on the software to complete assignments automatically, without true comprehension or critical engagement.

Academic dishonesty is becoming more complex, as conventional plagiarism tools struggle to detect AI-generated content. Even more concerning is the growing reliance on AI, which blurs the line between genuine student effort and machine-assisted work—raising important ethical and pedagogical questions.

THE CHANGING LANDSCAPE OF ACADEMIC DISHONESTY

According to Associate Professor Dr Mohd Khairie Ahmad, Dean of the School of Multimedia Technology and Communication at Universiti Utara Malaysia, the philosophy of technology is to simplify and enhance capabilities—and when it comes to the issue of AI in learning, it depends on context.

'Generative AI is a technological advancement capable of producing content that previously required human thought and effort. AI can certainly generate student assignments or coursework.

'If students rely entirely on AI, it could potentially hinder their learning process. This irresponsible or unethical use of AI to complete assignments—while claiming them as original work—is referred to as 'AIgiarism' or AI plagiarism,' he told Bernama.

Sharing that digital plagiarism or academic dishonesty is not a new phenomenon, Mohd Khairie said AI's development has made academic misconduct more dynamic. He noted that since generative AI gained popularity around 2022, the higher education world has become aware of and anticipated the challenges it brings.

'It is undeniable that the use of AI in learning—especially for assignment completion—has become common over the past year or two. There are students who rely entirely on AI to complete assignments or even answer tests or quizzes, especially when conducted online.

'Many students believe such actions are not wrong since AI is legal and not a prohibited technology. However, this is considered unethical because the work does not stem from the student's own cognitive effort or thinking. In fact, such conduct is regarded as a form of plagiarism.

'Typically, lecturers evaluate student assignments by measuring the similarity index, and now also through AI detection. Among the AI applications that can detect AI plagiarism are Turnitin, GPTZero, Winston AI and Copyleaks AI Detector,' he said, adding that evaluating the style, language structure, and content of assignments also helps detect breaches of academic integrity.

While not denying that educators, particularly lecturers, also use AI for teaching and research purposes, he said there can be no compromise when it comes to violating the principles of academic integrity. According to him, the world of higher education upholds the practice of respecting and valuing past scholarly works.
'A scholarly work requires reading and digesting prior writings as part of the process of generating new thoughts or ideas. This is a defining feature of academic writing and a core principle of scholarly work—to acknowledge references used, at the very least by listing them in citations.

'In the context of AI being a productive tool that supports scholarly work, it is therefore ethical to clearly disclose its use and to list the AI sources used to obtain information, ideas, and so on,' he said.

ESTABLISHING GUIDELINES

Responding to whether IPTs have clear guidelines on AI usage by students and lecturers, Mohd Khairie said that, to his knowledge, the Malaysian Qualifications Agency (MQA) was among the earliest to issue brief guidance through an Advisory Note in 2023 on the use of generative AI across all Malaysian institutions.

He added that in 2024, Universiti Teknologi Malaysia (UTM) published more specific guidelines for educators and students on the application of generative AI. These guidelines focus on lawful, responsible, transparent, trustworthy, and ethical use of AI, grounded in values, regulations, and legislation.

'Since AI has become a foundational and routine part of the teaching and learning process, all IPTs should have clearer and more specific guidelines for generative AI. Furthermore, these guidelines should eventually align with the AI Act currently being drafted by the National Artificial Intelligence Office (NAIO), under the Ministry of Digital,' he said.

Describing the best approach as educating students to use AI ethically and responsibly—as a learning aid rather than a shortcut to complete assignments—he stressed the importance of awareness education, especially since AI is poised to become an essential tool for enhancing learning efficiency and productivity.

'AI should be understood not as the end product but as a process that supports students' cognitive (thinking) activities. If this understanding doesn't take root, it's not impossible that digital 'illnesses' like brainrot (mental fatigue) may affect university students.

'AI is an unavoidable phenomenon and, at the same time, a current necessity. Its exposure and practice as a learning support tool should be promoted as a value and part of the academic culture.

'A study by leading international publisher Wiley found that in 2024, AI contributed to a 72 per cent increase in academic dishonesty compared to 2021 in the United States and Canada. However, responsible and ethical AI guidance by educators has been shown to potentially reduce academic misconduct among students,' he said.

AI AS PART OF THE ECOSYSTEM

Meanwhile, the Malaysian Cyber Consumers Association (MCCA) views the increasing use of AI—particularly ChatGPT—among students in IPTs as a clear sign that higher education is undergoing a profound technological transformation. Its president, Siraj Jalil, said that AI is no longer a tool of the future but has already become an integral part of the current ecosystem in IPTs.

'MCCA does not see this issue as entirely a threat, nor as an opportunity without risks. It lies in a grey area that can bring either benefits or harm depending on how it is used.

'If a student uses AI to enhance their understanding of a subject, generate ideas, or organise their thoughts, it can lead to progress. However, if it is used entirely without the involvement of reasoning, critical thinking, and a sense of responsibility, then it clearly challenges academic integrity.
'Therefore, MCCA believes this is the time for IPTs to re-evaluate their approaches to assessment and learning—not to reject AI from educational methods, but to develop a framework that allows AI to be used ethically and effectively,' he explained.

He noted that the concerns of some lecturers regarding this issue should also be taken seriously. MCCA has received a great deal of direct feedback from lecturers reporting a sharp increase in students submitting assignments almost entirely generated by AI.

'This not only disrupts the academic assessment process but also raises uncertainty in terms of academic aesthetics and values. The solution to this issue isn't merely to impose restrictions or punishments, but to create a more responsible academic ecosystem—one that focuses on ethics and perhaps even redefines academic benchmarks beyond AI usage.

'Every IPT should develop clear AI usage guidelines and integrate AI literacy and academic ethics modules into student orientation and professional development for lecturers. Assignments should also be restructured to emphasise process rather than just outcomes, such as through presentations, reflective portfolios, or fieldwork,' he added, noting that ethical use is shaped not by fear, but through understanding and clear guidance.

At the same time, Siraj suggested that lecturers be given training on the use of AI in research and academic writing, including the importance of disclosing AI usage openly in methodology or references to safeguard academic integrity.

'Academic publications—especially journals and conference proceedings—should begin adapting their policies on AI-generated content. What matters most is striking a balance between innovation and integrity. This is to address concerns that some research content could be produced without critical review or clear AI usage disclosure,' he said.

Siraj also believes that the Ministry of Higher Education (MOHE), in collaboration with NAIO, could formulate a national policy or official guidelines on AI usage in IPTs. He proposed that such a policy include several key components: permitted levels of AI usage, types of assignments appropriate for AI support, forms of misuse that warrant action, and AI literacy and ethics requirements for all campus communities.

'This policy should be developed inclusively, with engagement from academic experts, students, technology practitioners, and industry stakeholders to ensure it is responsive and practical.

'Responsible use of AI begins with the fundamental principle that AI is a tool—not a replacement for human reasoning. For students, responsibility begins with the awareness that learning is a process of self-development and understanding one's field, not just completing tasks for grades.

'Using AI to understand concepts or review writing structure is acceptable. But copying or generating an entire assignment without comprehension goes against the spirit and discipline of education,' he said, adding that both students and lecturers must understand the risks and threats of AI misuse, including the possibility of false information, biased algorithms, and unverified content dissemination.

AWARENESS AND HIGH LITERACY

Sharing his views, Muhammad Haziq Sabri, President of the Student Representative Council at Universiti Teknologi MARA Shah Alam for the 2024/2025 session, said ChatGPT has now become a common tool among university students and has helped him significantly in completing assignments and preparing notes for exams.
'It enables note generation from lecture slides and helps in understanding certain topics. Using ChatGPT to correct grammar and sentence structure also speeds up the process of completing assignments,' he said.

Rejecting the notion that the use of AI—particularly ChatGPT—is a form of academic cheating, he said it should be seen as a modern learning support tool that must be used responsibly.

'It becomes academic dishonesty when students just 'copy and paste' without understanding or modifying the content generated by ChatGPT. Almost all my friends also use ChatGPT, but not excessively—they focus on things like assignment structure and grammar checking.

'So far, I have not heard of any students facing disciplinary action for AI misuse. Most students use ChatGPT responsibly because they understand that misuse could violate the university's academic ethics policies,' he said.

Muhammad Haziq noted that according to Academic Circular No. 5 of 2023, official guidelines on the use of ChatGPT in teaching and learning have been issued, adding that lecturers are encouraged to guide students on using ChatGPT ethically as a learning tool. He said the circular also stresses the importance of ensuring that AI is used to foster critical thinking, understanding, and values—not merely for copying answers—as outlined in Article 6.

'This shows that the university not only allows the use of AI but encourages its responsible use and provides guidelines,' said the Bachelor of Public Relations student from the Faculty of Communication and Media Studies.

For Muhammad Asyraf Daniyal Abdul Halid, 24, a Master's research student in Marine Biotechnology at Universiti Malaysia Terengganu, ChatGPT serves as a guide, but over 90 per cent of the work comes from the student's own effort in sourcing credible information with proper citations.

'ChatGPT really helps us search and compile necessary information, develop ideas, and get an overview of the assignments or projects given by lecturers. However, plagiarism and failure to fact-check information are common forms of misuse among students,' he added, noting that not all students in higher learning institutions have a high level of awareness and literacy when using such software.

Detector de IA Understanding the Technology Behind Identifying AI-Generated Content

Time Business News

17-06-2025



To address these challenges, Detector de IA tools have been developed—specialized tools designed to determine whether content was created by a human or generated by artificial intelligence. This article explores how AI detectors work, their applications, limitations, and the future of this important technology.

A Detector de IA is a tool or algorithm developed to examine digital content and assess whether it was produced by a human or generated by an artificial intelligence system. These detectors are capable of analyzing text, images, audio, and video to detect patterns commonly associated with AI-generated content. AI detectors are being widely adopted across multiple sectors such as education, journalism, academic research, and social media content moderation. As AI-generated content continues to grow in both volume and complexity, the need for accurate and dependable detection methods has increased dramatically.

Detector de IA tools rely on a combination of computational techniques and linguistic analysis to assess the likelihood that content was generated by an AI. Here are some of the most common methods:

Perplexity measures the predictability of a text, indicating how likely a sequence of words is based on language patterns. AI-generated text tends to be more predictable and coherent than human writing, often lacking the spontaneity and errors of natural human language. Lower perplexity scores typically suggest a greater chance that the text was generated by an AI system. (A short sketch of this computation follows the tool list below.)

AI writing often exhibits specific stylistic patterns, such as overly formal language, repetitive phrasing, or perfectly structured grammar. Detectors look for these patterns to determine authorship.

Certain detectors rely on supervised learning models that have been trained on extensive datasets containing both human- and AI-generated content. These models learn the subtle distinctions between the two and can assign a probability score indicating whether a given text was AI-generated.

Newer methods include embedding hidden watermarks into AI-generated content, which can be identified by compatible detection tools. In some cases, detectors also analyze file metadata for clues about how and when content was created.

Several platforms and tools have emerged to help users detect AI-generated content. Some of the most well-known include:

• GPTZero: One of the first widely adopted detectors designed to identify content generated by large language models like ChatGPT.
• A tool popular in academic and publishing settings that offers plagiarism and AI content detection in a single platform.
• Turnitin AI Detection: A go-to tool for universities, integrated into the Turnitin plagiarism-checking suite.
• Copyleaks AI Content Detector: A versatile tool offering real-time detection with detailed reports and language support.
• OpenAI Text Classifier (now retired): Initially released to help users differentiate between human and AI text, it laid the groundwork for many newer detectors.
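To illustrate the perplexity method described above, here is a minimal sketch that scores a passage with an open language model (GPT-2 via the Hugging Face transformers library). This is not how any particular commercial detector works, and the threshold below is invented for illustration; it only shows what "low perplexity looks machine-like" means in code.

```python
# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean per-token cross-entropy loss."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc["input_ids"], labels=enc["input_ids"])
    return float(torch.exp(out.loss))

sample = "The results of the analysis indicate a significant increase in overall performance."
score = perplexity(sample)
# Illustrative threshold only; real detectors calibrate on large labeled corpora.
print(f"perplexity = {score:.1f} ->", "more AI-like" if score < 40 else "more human-like")
```

In practice, detectors calibrate such scores on large corpora and combine them with other signals (burstiness, classifier probabilities, watermark checks), which is why a single threshold like this is unreliable on its own.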
With students increasingly using AI tools to generate essays and homework, educational institutions have turned to AI detectors to uphold academic integrity. Teachers and universities use these tools to ensure that assignments are genuinely authored by students.

AI-written news articles, blog posts, and press releases have become common. AI detectors help journalists verify the originality of their sources and combat misinformation.

Writers, publishers, and editors use AI detectors to ensure authenticity in published work and to maintain brand voice consistency, especially when hiring freelancers or accepting guest submissions.

Social media platforms use AI detection tools to identify and block bot-generated content or fake news. This improves content quality and user trust.

Organizations are increasingly required to meet ethical and legal responsibilities by disclosing their use of AI. Detection tools help verify content origin for regulatory compliance and transparency.

Despite their usefulness, AI detectors are far from perfect. They face several notable challenges:

• Detectors may mistakenly classify human-written content as AI-generated (false positive) or vice versa (false negative). This can have serious consequences, especially in academic or legal settings.
• As generative models like GPT-4, Claude, and Gemini become more advanced, their output increasingly resembles human language, making detection significantly harder.
• The majority of AI detectors are predominantly trained on English-language content. Their accuracy drops when analyzing content in other languages or domain-specific writing (e.g., legal or medical documents).
• Users can easily modify AI-generated content to bypass detection. A few manual edits or paraphrasing can make it undetectable to most tools.

As AI detectors become more prevalent, ethical questions arise: Should users always be informed that their content is being scanned for AI authorship? Can a student or professional be penalized solely based on a probabilistic tool? How do we protect freedom of expression while maintaining authenticity? There is an ongoing debate about striking the right balance between technological regulation and user rights.

Looking forward, AI detectors are expected to become more accurate, nuanced, and embedded into digital ecosystems. Some future developments may include:

• Built-in AI Signatures: AI models could embed invisible watermarks into all generated content, making detection straightforward.
• AI-vs-AI Competition: Detection tools may be powered by rival AI systems trained to expose the weaknesses of generative models.
• Legislation and Standards: Governments and industry bodies may enforce standards requiring disclosure when AI is used, supported by detection audits.
• Multi-modal Detection: Future detectors will analyze not only text but also images, videos, and audio to determine AI involvement across all content types.

Detector de IA tools have become vital in a world where artificial intelligence can mimic human creativity with striking accuracy. They help preserve trust in digital content by verifying authenticity across education, journalism, and communication.
However, as generative AI evolves, so too must detection tools—becoming smarter, fairer, and more transparent. In the coming years, the effectiveness of AI detectors will play a critical role in how societies manage the integration of AI technologies. Ensuring that content remains trustworthy in the age of artificial intelligence will depend not only on technological advancement but also on ethical application and regulatory oversight.

Fact Check: Don't fall for photos of Pope Leo XIV tumbling down stairs

Yahoo

11-06-2025



Claim: In June 2025, a series of photographs authentically showed Pope Leo XIV falling down stairs.

Rating:

In 2025, a set of photographs allegedly depicting Pope Leo XIV falling down stairs circulated online. For example, one Facebook post (archived) by the account Daily Bible Verse shared three images: one of the pope waving to the crowd as he walked down stairs and two of him falling down stairs. The same photos appeared several times on Facebook (archived) and Threads (archived). However, the story was fictional. A Google search (archived) and a Google News search (archived) revealed no reputable news outlet reported this incident.

Of the three images, the one showing the pope waving was most likely authentic. The photo started circulating online on May 21, 2025, after the pope's first weekly general audience. Similar photos from that event appeared on the same day in the same setting from reputable news agencies such as Getty Images, NurPhoto and The Associated Press, and artificial intelligence detectors indicated it was not AI-generated.

But there were visual clues that the two smaller images showing the pope falling were unlikely to be real. For example, Leo's face in them was blurry and elongated. His position as he fell also appeared to change from image to image — falling backward in the first image and then falling forward in the second — in a way that seemed physically implausible. Snopes ran the images through two different artificial intelligence image detectors, Decopy and Undetectable, both of which determined the images of the pope falling were AI-generated.

The pinned comment on the Daily Bible Verse post linked to a website with an article that appeared to have little to do with the photographs. It read:

According to multiple eyewitnesses, a piece of ceremonial technology—possibly a small microphone transmitter or liturgical device—detached unexpectedly from Pope Leo's vestment and fell near the altar. The moment was brief, almost imperceptible to many in the crowd, but cameras caught it. Within minutes, social media platforms exploded with theories, commentary, and metaphor-laden interpretations.

Snopes ran the text of the article through two AI text detectors, Quillbot and GPTZero, both of which concluded it was AI-generated — a clue that the website in question was a junk content farm filled with so-called "AI slop."

Snopes often fact-checks fake and altered images of well-known people; see, for example, our story on an edited image of tech billionaire Elon Musk's chest and a fact check debunking an image of United Healthcare CEO shooting suspect Luigi Mangione wearing a "Sailor Moon" costume.

Ibrahim, Nur. "Fake Photo Shows Luigi Mangione in 'Sailor Moon' Costume." Snopes, 16 Dec. 2024. Accessed 10 June 2025.

Liles, Jordan. "Photo of Elon Musk Altered to Increase His Chest and Stomach Size." Snopes, 11 Nov. 2024. Accessed 10 June 2025.
