
Study: ChatGPT Disrupts the Learning Process - Jordan News
Source: AFP
Related Articles

Ammon
21 hours ago
Justice in the Age of Artificial Intelligence: Who Rules, Man or Machine? 1-2
In May 2025, a landmark ruling issued by the U.S. District Court for the Southern District of Florida exposed a real-world example of AI hallucination. The story began when a visiting attorney from California, alongside a local Florida lawyer, submitted a legal brief citing a judicial decision from the Delaware Court. The shock came when the court discovered that the cited ruling never existed: it had been fabricated by an AI tool used for legal research. This alarming incident highlights an existential challenge to the core principles of justice and accountability within the legal system, namely the necessity of fairness, integrity, and infallibility in justice.

Generative artificial intelligence has become a formidable, disruptive force within courtrooms, law offices, and legislative bodies. Today, AI actively participates in drafting contracts, generating complex legal arguments, editing judicial decisions, and even producing preliminary legislative texts. This is not simply a technical breakthrough; it is a paradigm shift that redefines the very structure of our legal systems. Recent studies reveal that AI-generated legal texts now span a vast array of critical documents, from commercial contracts and legal pleadings to judicial rulings, legislative proposals, legal complaints, and even preliminary police reports. These are produced using advanced tools like ChatGPT, Westlaw, and CoCounsel, creating a new legal reality where algorithmic power converges with human intention.

Law firms increasingly rely on AI to rapidly produce draft contracts, while courts use it to analyze case patterns and predict outcomes. Some legislatures have even begun accepting draft bills generated by AI tools, subject only to final human review. This dependency raises critical questions of responsibility, review, and consequences. Who is accountable for errors? Who verifies the content? Who bears the legal implications of a misstep?

Amid this enthusiasm for technological progress, deeper challenges emerge, ones that extend far beyond technical concerns to strike at the heart of ethical and philosophical questions about justice itself. Foremost among these challenges are the dangers of hallucination and bias. As the Florida case clearly demonstrated, AI tools, despite their computational power, can generate fictitious citations and entirely false legal precedents. This is not a minor technical glitch; it undermines the foundation of legal fairness and equality before the law. Bias embedded in training data may skew legal analysis, raising profound philosophical concerns about how justice can be achieved when the algorithm's foundation is inherently flawed.

A second looming threat is the phenomenon of legal floodgates. The ease with which AI can generate vast volumes of legal text may lead to an overwhelming influx of redundant or unnecessary documents. Courts may become buried under irrelevant data, straining judicial efficiency and potentially damaging public trust in the legal system. The justice process may become clogged with voluminous yet valueless content, diluting the importance of professional legal judgment and undermining procedural clarity.

A third and equally troubling issue is that of authenticity and authorship. Here arises a fundamental question that strikes at the moral fabric of the legal profession: who truly authored a given legal text?
Does a document reflect the considered intention of an attorney or the deliberation of a judge, or is it merely the product of an algorithm, devoid of human intent or ethical responsibility? This issue plunges us into the domain of moral philosophy and legal theory, where the 'original intent' behind a legal document is paramount. When human authorship is obscured, the chain of accountability becomes dangerously unclear, threatening the legal system's foundational principles.

Legal institutions across the globe vary in how they approach these transformations, exposing a troubling regulatory gap. Some courts, particularly in the United States, as illustrated by the Florida decision, now explicitly prohibit the submission of AI-generated legal briefs or rulings unless thoroughly reviewed by a human. Meanwhile, other jurisdictions require only clear disclosure of AI usage and mandate human review prior to official submission. This divergence reveals the lack of a unified regulatory framework to govern such technologies.

On the other side of the equation, tech companies have initiated voluntary self-regulation, embedding safeguards to limit AI output in sensitive legal contexts. While such efforts are commendable, they lack legal enforcement and are largely driven by internal ethics and market realities. This reveals the limitations of self-regulation and underscores the urgent need for external legislative intervention to foster long-term trust in legal institutions.

Justice today is no longer written solely by the pens of lawyers and the verdicts of judges; it is increasingly authored in lines of code and AI-generated prompts. This transformation is not merely technical; it is deeply philosophical, changing how we understand law, its origins, and the scope of accountability. The question is no longer 'Should we use AI?' but rather 'How do we use it in a way that ensures justice, protects truth, and preserves the irreplaceable role of human conscience in legal decision-making?' Law is not a static script; it is the living spirit of justice. It must not be distorted by algorithms nor misled by artificial minds. Professional integrity must remain indivisible and untouchable, no matter how advanced the tools we wield.


Jordan News
2 days ago
Study: ChatGPT Disrupts the Learning Process
University students are increasingly turning to generative AI tools, even when asked to write about their personal experiences. A recent study has found that students who use this technology to write texts tend to exhibit lower levels of critical thinking.

When Jocelyn Litzinger asked her students to write a personal story about discrimination, she noticed something peculiar: many stories featured a character named 'Sally.' 'Clearly, that's a common name in ChatGPT,' said the Chicago-based professor with a hint of disappointment. Litzinger, who teaches business management and social sciences at the University of Illinois, remarked, 'My students weren't even writing about their own lives.' She noted that about half of her 180 students used ChatGPT inappropriately during the last semester, including in assignments about ethical issues related to AI.

Speaking to AFP, she said she wasn't surprised by a recent study suggesting that students who use generative AI to write tend to show less critical thinking. The preliminary study, which has yet to undergo peer review, went viral on social media and resonated with many educators facing similar issues with their students. Since its publication last month, over 3,000 teachers have reached out to the research team at the Massachusetts Institute of Technology (MIT), which conducted the study, according to lead researcher Natalia Kosmina.

The Experiment

In the study, 54 students from the Boston area were divided into three groups and asked to write short essays over 20 minutes. One group used ChatGPT, the second used a search engine, and the third relied solely on their own knowledge. Researchers monitored the students' brain activity over several months and had two teachers assess the essays.

The texts written with ChatGPT were significantly worse than those written without AI assistance. Brain scans showed reduced communication between different brain regions among the ChatGPT users. Notably, more than 80% of students who used AI couldn't recall a single sentence from their essays, compared to only 10% in the other two groups. During the third session, many of these students seemed to rely heavily on copying.

'Soulless' Writing

The educators grading the papers reported that they could easily identify the 'soulless' texts generated with AI. Although grammatically correct, these writings lacked creativity, personal depth, and critical insight. Kosmina cautioned against simplistic interpretations in the media claiming that AI is making people 'dumber' or lazier. In fact, during a fourth session, the group that had only used their own knowledge was asked to use ChatGPT for the first time and, surprisingly, showed higher neural activity, suggesting the tool could still stimulate thinking if introduced later in the learning process. Still, Kosmina emphasized the need for more rigorous studies to better understand how to use AI tools in ways that enhance, not replace, learning.

Expert Criticism

Ashley Juavinett, a neuroscientist at the University of California, San Diego, who was not involved in the study, criticized what she called 'overblown conclusions.' She told AFP, 'The paper doesn't offer conclusive evidence or the methodological rigor needed to determine how large language models like ChatGPT affect the brain.' Nevertheless, Litzinger said the findings reflect her own observations: since ChatGPT's release in 2022, spelling errors have dropped, but originality has declined.
Much like calculators once forced teachers to rethink math instruction, AI now demands a reimagining of writing education. But Litzinger worries that students no longer need any foundational knowledge before using AI—thus skipping the most vital phase of learning. The issue extends far beyond classrooms. Scientific journals are struggling with the surge of AI-generated articles, and even publishing is affected: one startup reportedly plans to release 8,000 AI-written books per year. 'Writing is thinking, and thinking is writing,' said Litzinger. 'If we remove that process, what's left of the mind?' Source: AFP


Al Bawaba
3 days ago
flydubai partners with emaratech to introduce smart biometric gates
As part of its ongoing commitment to innovation and investing in the latest technologies, flydubai has partnered with emaratech, a leading technology organisation in the UAE, to implement smart border control solutions for its pilots and cabin crew at the carrier's Airport Operations Centre. The new smart gates utilise advanced biometric technology, AI-driven verification and real-time data integration to streamline immigration processes. This provides a faster and more efficient immigration experience for its flight operations, supporting the carrier's commitment to operational efficiency, especially during busy travel periods.

His Excellency Thani Alzaffin, Group Chief Executive Officer of emaratech, said: 'We are proud to partner with flydubai in pioneering a next-generation, paperless immigration experience for their crew members. Through the integration of AI-powered facial recognition technology, our smart gates seamlessly connect with both flydubai's and immigration's platforms, enabling real-time validation and a truly frictionless journey.'

'This initiative reflects emaratech's continued commitment to redefining border control processes, making them smarter, faster, and more secure. By harnessing the power of artificial intelligence, we are shaping a future where innovation drives convenience and trust at every checkpoint. We look forward to deepening our collaboration with flydubai across future initiatives that further enhance the travel experience for both passengers and crew,' he added.

Hareb AlMheiri, Chief Procurement & Technology Officer at flydubai, said: 'We are pleased to have partnered with emaratech to implement this innovative solution for our pilots and cabin crew. We always look for opportunities to harness the latest technologies that support our growth and operational efficiencies, and with the introduction of these biometric smart gates, this marks another step towards fostering a more seamless, punctual and secure operation as we future-proof our systems.'

The carrier continues to invest in technologies that improve the daily experience of its frontline teams. Six smart biometric gates have been installed at the flydubai Airport Operations Centre, where crew report for their flights. Today, flydubai has created a growing network of more than 135 destinations served by a modern and efficient fleet of 89 aircraft. The carrier has also built a strong workforce of more than 6,400 employees, more than 1,300 of whom are pilots, along with 2,500 cabin crew.