
Latest news with #LargeLanguageModel

Startup CEO says Google had everything ..., yet OpenAI beat them to the LLM Gold Rush, Elon Musk's 'one-word' reply

Time of India

5 days ago

  • Business
  • Time of India

Startup CEO says Google had everything ..., yet OpenAI beat them to the LLM Gold Rush, Elon Musk's 'one-word' reply

An online debate over Silicon Valley's AI race reignited recently, and Tesla CEO Elon Musk weighed in with a one-word response. The debate began after a US federal judge ruled that Anthropic, an AI company, did not break copyright law by training its AI model Claude on books. The judge found the use of the books to be fair use: an AI model does not copy or reproduce books, but learns from them and then generates original content.

Soon after the judgement, a debate started on X (formerly known as Twitter). Startup CEO Luis Batalha asserted that Google, despite possessing "everything" needed, was ultimately outmaneuvered by OpenAI in the burgeoning Large Language Model (LLM) "gold rush." 'Google had everything: the transformer, massive compute, access to data, even Google Books - yet OpenAI beat them to the LLM gold rush. Having the pieces isn't the same as playing the game,' wrote Batalha. Musk endorsed the sentiment with a one-word response: 'True'.

Google has long been at the forefront of AI research; the company has published seminal papers and developed advanced models. With the launch of ChatGPT, however, OpenAI captured the imagination of millions and ignited the current "LLM gold rush," forcing other tech giants to accelerate their own public-facing generative AI initiatives. Critics suggest that Google's cautious approach, perhaps owed to its established market position and the risks of deploying rapidly evolving AI, allowed a leaner, more focused entity like OpenAI to seize the early lead.

How is AI affecting New Zealand creatives?

The Spinoff

6 days ago

  • Science
  • The Spinoff

How is AI affecting New Zealand creatives?

Claire Mabey speaks to four local creatives who say they've experienced a decline in work linked to the rise of AI.

A recent study on the 'cognitive cost of using a Large Language Model (LLM)' found that the critical thinking skills of ChatGPT users may decline over time. 'While these systems reduce immediate cognitive load, they may simultaneously diminish critical thinking capabilities and lead to decreased engagement in deep analytical processes,' says the report. The study was prompted by 'the rapid proliferation of LLMs' across all aspects of our lives, including work, education and home.

Despite the uncertain impact of AI on our cognitive abilities, ethical and environmental concerns, and its potential impact on people's employment prospects, many workplaces, including here in Aotearoa, are adopting AI solutions in place of human processes. The creative sector is among the most affected, as LLMs and design-based tools offer quick and cheap alternatives to human craft and expertise. The Spinoff spoke to four Aotearoa creatives who have lost work to AI. Here are their stories.

Freya Daly Sadgrove, creative writer and editor

In 2024 Freya Daly Sadgrove got 'a dreamy job' marking weekly personal development reports from master of engineering students in an innovative course at an Australian university, work she describes as 'essentially marking people's diaries on the quality of their introspection'. Daly Sadgrove was deeply invested in the work and relished the privilege of reading such personal accounts. The most important part of the job, from her perspective, was writing constructive feedback in response, engaging on a deep level with the students' personal revelations to foster their self-awareness and interpersonal skills. It was subtle work, she says, and required a high level of empathy. Most of the markers, including Daly Sadgrove, weren't involved in engineering at all and were hired instead for their ability to understand people and communicate with them. 'I loved it so much,' says Daly Sadgrove. 'I loved the insight I got into the minds of people with very different lives from me.' She found the job rewarding, too – she could see how her feedback was having a positive effect on the students week after week.

After one semester, Daly Sadgrove was offered the job again for the next semester. But this time, she and the other markers were told they would no longer be writing the feedback; instead they would be 'lightly editing feedback generated by a Large Language Model (LLM)'. The changes to the job description were laid out in a document sent to all of the markers, which justified the use of LLMs by saying the AI tools 'remove all the boring parts of the marker job, by getting it to write all the routine 'framing' parts of the feedback … You can focus your efforts on the parts of the feedback students will actually read, giving them the most relevant and effective takeaways.' The university's document said the LLMs would provide a summary of each section of the students' work and that 'the machine will give feedback that will be good at making the student feel heard. LLMs are not useful for helping students improve or delivering significant insights.'

As a writer, Daly Sadgrove found it demeaning that the university framed the use of the LLM as a way to remove the 'boring writing parts'. She was also confused by the assumption that students weren't reading the markers' feedback, given she had seen weekly evidence that they were. Daly Sadgrove wrote a response to the university rejecting the reframed job. 'I am floored by the logic that spends any energy on designing a machine to do the job of making students feel heard,' she wrote. 'Do the students know it'll be a machine making them feel heard? Do you think that will make them feel heard? Or are we planning not to tell the students that our job as humans is specifically not to listen to them, but instead to listen largely to the machine that has processed their thoughts?' Daly Sadgrove, and several of her colleagues, declined the job offer.

Jackie Lee Morrison, writer and editor

In 2022 Jackie Lee Morrison joined a copywriting company as a project manager and lead editor. The company had positioned itself as a copywriting agency with real, skilled writers, and when Lee Morrison joined, its workload was steady and growing – so much so that she was involved in recruiting new writers to expand the team to meet demand. In November 2022 OpenAI's ChatGPT was released to the public, and Lee Morrison saw an immediate drop-off in client work as people began to experiment with AI solutions. The release of ChatGPT coincided with the company taking on a major shareholder in the US, which was expected to bring in more US clients. The newly expanded team was ready and waiting for the increase in work, but it never came: the US clients were the first to start experimenting with AI.

Given the company marketed itself as using real writers, the internal stance on AI was strict; Lee Morrison says they used several AI detection programmes to vet work. From January 2023, however, copywriting contracts dropped off to the point where there was no longer enough work to support her team. Some writers were let go, others left on their own. Other writers, says Lee Morrison, 'were simply left hanging, waiting for potential work'. Towards the end of 2023, work had dwindled significantly. Lee Morrison says her manager wanted to keep her on board as long as possible, but even his position within the company was precarious. At the end of the year, she made the decision to leave. 'I think things would've just dropped off,' she says.

Ash Raymond James, writer and graphic designer

Ash Raymond James has been a freelance writer and designer for more than a decade. He says AI is having a 'severely negative' impact on his work, particularly on book design and editing contracts, as publishing companies and self-publishing clients turn to AI. Clients are asking AI models to edit their work and are using AI tools to create design assets such as logos and social media graphics. Some clients, he says, expect his fees to decrease because they assume he will use widely available AI tools that are faster than his non-AI processes. He has seen major companies build AI elements into their work instead of using human designers, and has received many responses from clients saying that, instead of using his services, they will use AI to create design assets themselves because it is cheaper and faster. 'It is impossible to compete financially,' he says. 'I am being hired significantly less as AI becomes more normalised. From my point of view as a full-time creator, AI is crippling industries and stealing opportunities.'

Hera Wynn, designer and animator

When Hera Wynn studied media design in 2013, the general mood was that computers were the future and learning computer-based tools was the way forward. 'But it's gone too far the other way,' she now says. After Wynn had a child she found it hard to focus on coding (websites, games and apps), so in 2022 she pivoted to illustration and animation work. 'But I did that just as AI became more open source, and now I feel redundant and like I'm running out of time. I'm lost,' she says. 'I feel like hospitality and retail jobs might be the last to survive.'

Relying on AI could be weakening the way we think, researchers warn

Sinar Daily

20-06-2025

  • Science
  • Sinar Daily

Relying on AI could be weakening the way we think, researchers warn

ARTIFICIAL intelligence is progressively transforming how we write, research and communicate. But according to MIT's latest study, this digital shortcut may come at a steep price: our brainpower.

A new study by researchers at the Massachusetts Institute of Technology (MIT) has raised red flags over the long-term cognitive effects of using AI chatbots like ChatGPT, suggesting that outsourcing our thinking to machines may be dulling our minds, reducing critical thinking and increasing our 'cognitive debt.' The researchers found that participants who used ChatGPT to write essays exhibited significantly lower brain activity, weaker memory recall and poorer performance on critical thinking tasks than those who completed the same assignments using only their own thoughts or traditional search engines. 'Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,' the research paper elaborated.

The MIT study

The study involved 54 participants, divided into three groups: one used ChatGPT, another relied on search engines, and the last used only their own brainpower to write four essays. Using electroencephalogram (EEG) scans, the researchers measured brain activity during and after the writing tasks. The results were stark. 'EEG revealed significant differences in brain connectivity. Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM (Large Language Model) users displayed the weakest connectivity,' the researchers reported.

Those who used AI chatbots displayed reduced 'theta' brainwaves, which are associated with learning and memory formation. Researchers described this as 'offloading human thinking and planning': the brain was doing less work because it was leaning on the AI. When later asked to quote or discuss the content of their essays without AI help, 83 per cent of the chatbot users failed to provide a single correct quote, compared with just 10 per cent among the search engine and brain-only groups. In the context of the study, this suggests they either didn't engage deeply with the content or simply didn't remember it. 'Frequent AI tool users often bypass deeper engagement with material, leading to 'skill atrophy' in tasks like brainstorming and problem-solving,' lead researcher Dr Nataliya Kosmyna warned. The chatbot-written essays were also found to be homogenous, with repetitive themes and language, suggesting that while AI might produce polished results, it lacks diversity of thought and originality.

Are our minds getting lazy?

The MIT findings echo earlier warnings about the dangers of 'cognitive offloading', a term for relying on external tools to think for us. A February 2025 study by Microsoft and Carnegie Mellon University found that workers who relied heavily on AI tools reported lower levels of critical thinking and reduced confidence in their own reasoning abilities. The researchers warned that overuse of AI could cause our 'cognitive muscles to atrophy': essentially, if we don't use our brains, we lose them. The trend is raising concerns about serious consequences for education and workforce development. The MIT team cautioned that relying too much on AI could diminish creativity, increase vulnerability to manipulation, and weaken long-term memory and language skills.

The dawn of a new era?

With AI chatbots becoming increasingly common in classrooms and homework help, educators face a difficult balancing act. While these tools can and have supported learning, overreliance on artificial intelligence risks undermining the very skills schools aim to develop. Teachers have voiced concerns that students are using AI to cheat or shortcut their assignments, and the MIT study provides hard evidence that such practices don't just break rules: they may actually hinder intellectual development. The primary takeaway is not that AI is inherently bad, but that how we use it matters greatly. The study reinforces the importance of engaging actively with information rather than blindly outsourcing thinking to machines. As the researchers put it: 'AI-assisted tools should be integrated carefully, ensuring that human cognition remains at the centre of learning and decision-making.'

Is anything real anymore? AI testimonials take over the American justice system

Time of India

19-06-2025

  • Time of India

Is anything real anymore? AI testimonials take over the American justice system

Generative AI has been developing at a breakneck pace since the high-profile release of ChatGPT in November 2022. The Large Language Model (LLM) garnered massive media recognition for its ability to write complex and coherent responses to simple prompts. Other AI LLMs, such as Microsoft's 'Sydney' (now Copilot), also gained media notoriety for the manner in which they seemed to mimic human emotions to an uncanny degree. Written text is not the only area where AI is having a disruptive effect: image generation tools such as Midjourney and video generation programs such as Google Veo are progressively blurring the line between what's made by humans and what's made by AI. Google Veo, in particular, became infamous for generating short videos resembling viral social media posts, leaving netizens wondering how convincing they looked. These rapid developments have led to increased concerns about AI's disruptive impact on everyday life, and that disruption has now reached the courtrooms of the United States.

AI testimonies are now part of the US court system

AI video is now being introduced as a kind of posthumous testimony in court trials. During a manslaughter sentencing hearing for the killing of Christopher Pelkey, an American man shot and killed in a road rage incident, an AI-generated video of Pelkey was played in which he gave a victim impact statement. In it, the AI says: 'To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day, under those circumstances…I believe in forgiveness, and a God who forgives and I always have. I still do.' Pelkey's sister, Stacy Wales, had given her own testimony during the sentencing hearing, but didn't feel that her words alone could properly convey the extent of her grief. Pelkey was killed in a road rage incident in Chandler in 2021, but last month artificial intelligence brought him back to life during his killer's sentencing hearing. At the end of the hearing, Gabriel Horcasitas was sentenced to 10.5 years in prison. The ruling has since been appealed, shining a spotlight on the disruptive impact AI technology is already having on America's court system.

Speaking to the Associated Press, AI deepfake expert David Evan Harris said the technology might end up stacking the deck in favour of the wealthy and privileged: 'I imagine that will be a contested form of evidence, in part because it could be something that advantages parties that have more resources over parties that don't.' In one of the viral Google Veo videos that took the internet by storm, an AI-generated girl says: 'This is wild. I'm AI generated by Veo 3. Nothing is real anymore.' With the increasing normalization of AI technology in everyday life, as well as in vital civic avenues such as criminal justice, the impacts of such technologies are sure to be dissected and studied for years to come.

AmTrust Wins Celent Model Insurer Award for Digital Customer Experience

Business Wire

18-06-2025

  • Business
  • Business Wire

AmTrust Wins Celent Model Insurer Award for Digital Customer Experience

NEW YORK--(BUSINESS WIRE)--AmTrust Financial Services, Inc. ('AmTrust' or the 'Company') has been recognized by Celent, a global research and advisory firm for the financial services industry, as a winner of a Model Insurer award for Digital Customer Experience. The Company won the award for its groundbreaking platform, AmTrust Genius, an AI-powered quoting solution that is redefining how brokers and agents generate insurance quotes. Built into the AmTrust Online broker portal, AmTrust Genius uses Large Language Model (LLM) technology to instantly extract key risk details from data sources including quotes, proposals and existing policies. What once required tedious manual keying is now automated – saving time, reducing errors and accelerating deal flow. The platform also suggests cross-selling opportunities tailored to small businesses' needs, and each quote is benchmarked and enhanced with GenAI-driven recommendations, giving brokers a clear, personalized rationale for every coverage and limit.

'AmTrust's initiative is a prime example of transformative efficiency in the insurance sector,' said Nathan Golia, Senior Analyst at Celent. 'Considering the time savings plus the potential business value of getting these competitive quotes done more quickly and easily, it's nothing short of a slam-dunk to leverage this technology for such a drastic quality-of-life improvement.'

'We sincerely appreciate this award from Celent, which recognizes our differentiated approach to providing competitive quotes,' said Ariel Gorelik, AmTrust's Global Chief Operating Officer. 'We lead with a spirit of continuous innovation and are always working on new technologies to improve the user experience. With AmTrust Genius, we are providing our agents and brokers a quick and easy-to-use submission system for Workers' Compensation, Businessowners, and Cyber insurance policies.'

Celent's annual Model Insurer Awards recognize best practices in technology usage across areas critical to success in insurance. Nominations are submitted by insurance carriers and undergo a rigorous evaluation by Celent analysts, who judge submissions on three core criteria: demonstrable business benefits of live initiatives, the degree of innovation relative to the industry, and technology or implementation excellence.

About AmTrust Financial Services, Inc.

AmTrust Financial Services, Inc., a multinational insurance holding company headquartered in New York, offers specialty property and casualty insurance products, including workers' compensation, business owner's policy (BOP), general liability and extended service and warranty coverage. For more information about AmTrust, visit
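For readers curious what LLM-driven extraction of this kind typically looks like, here is a minimal, hypothetical sketch in Python. It is not AmTrust's implementation (the release does not disclose any internals); the field names, the prompt, and the `call_llm` hook are all illustrative assumptions about how free-form submission documents might be turned into structured quote data.

```python
import json
from typing import Callable

# Hypothetical schema: the release doesn't disclose AmTrust Genius's
# actual fields, so these are illustrative stand-ins.
EXTRACTION_PROMPT = """\
Extract these fields from the insurance document below and return a
single JSON object. Use null for any field that is not present.
Fields: business_name, industry, annual_payroll, employee_count,
requested_coverages (list of strings), current_carrier.

Document:
{document}
"""

def extract_risk_details(document: str, call_llm: Callable[[str], str]) -> dict:
    """Use an LLM to pull structured risk details out of free-form text
    (a prior quote, proposal, or existing policy) instead of re-keying it."""
    raw = call_llm(EXTRACTION_PROMPT.format(document=document))
    return json.loads(raw)  # a real system would validate against a schema

if __name__ == "__main__":
    # Stub LLM so the sketch runs end to end without any vendor API.
    def fake_llm(prompt: str) -> str:
        return json.dumps({
            "business_name": "Example Bakery LLC",
            "industry": "food service",
            "annual_payroll": 420000,
            "employee_count": 8,
            "requested_coverages": ["workers' compensation", "BOP"],
            "current_carrier": None,
        })

    details = extract_risk_details("Proposal text goes here...", fake_llm)
    print(details["business_name"], details["requested_coverages"])
```

In a production pipeline, the parsed JSON would be validated against the quoting system's schema before pre-filling the broker portal, which is where the time savings over manual keying would come from.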
