Is Using ChatGPT to Write Your Essay Bad for Your Brain?


Time Magazine, 16 hours ago

TIME reporter Andrew Chow discussed the findings of a new study about how ChatGPT affects critical thinking with Nataliya Kosmyna. Kosmyna was part of a team of researchers at MIT's Media Lab who set out to determine whether ChatGPT and large language models (LLMs) are eroding critical thinking, and the study returned some concerning results. The study divided 54 subjects into three groups and asked them to write several essays using OpenAI's ChatGPT, Google's search engine, or nothing at all, respectively. Researchers used an EEG to record the writers' brain activity. Of the three groups, the ChatGPT users showed the lowest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels. Over the course of several months, the ChatGPT users got lazier with each subsequent essay, often resorting to copying and pasting.


Related Articles

Quantum, Moore's Law, And AI's Future

Forbes, an hour ago

In the game of AI acceleration, there are several key moving parts. One of them is hardware: what do the chips look like? And this is a very interesting question. Another is quantum computing: what role will it play? Another is scaling. Everyone from CEOs and investors to engineers is scrambling to figure out what the future looks like, but we got a few ideas from a recent panel at Imagination in Action that assembled some of the best minds on the matter.

WSE and the Dinner Plate of Reasoning

Not too long ago, I wrote about the Cerebras WSE chip, a mammoth piece of silicon about the size of a dinner plate that is allowing the centralization of large language model efforts. This is an impressive piece of hardware by any standard, and it has a role in coalescing the vanguard of what we are doing with AI hardware.

In the aforementioned panel discussion, Julie Choi from Cerebras started by showing off the company's WSE superchip, noting that some call it the 'caviar of inference.' (I thought that was funny.)

'I think that as we evolve, we're just going to see even more innovative, novel approaches at the hardware architecture level,' she said.

'The optimization space is extremely large,' said Dinesh Maheshwari, discussing architecture and compute units. 'So I encourage everyone to look at it.'

Panelist Caleb Sirak, also of MIT, talked about ownership of hardware. 'As the models themselves start to change, how can businesses themselves integrate them directly and get them for a fair price, but also convert that AI, and the energy involved, into a productive utility?'

'What is a computer, and what can a computer do?' asked Alexander Keesling, explaining his company's work on hardware. 'We took the fundamental unit of matter, a single atom, and turned it into the fundamental unit of information, which is a quantum bit … a quantum computer is the first time in human history where we can take advantage of the fundamental properties of nature to do something that is different and more powerful.'

Jeremy Kepner of MIT's Lincoln Lab had some thoughts on the singularity of computing – not the race toward AGI, but a myopic centralization of an overarching 'operation.'

'Every single computer in the high end that we built for the last many decades has only done one operation,' he said. 'So there's a lot to unpack there, but it's for very deep mathematical and physics reasons: that's the only operation we've ever been able to figure out how to accelerate over many decades. And so what I often tell the users is, the computer picks the application. AI happens to be acceleratable by that operation.'

He urged the audience to move forward in a particular way. 'Think about whatever you want to do, and if you can accelerate it with that kind of mathematical operation, you know the sky is the limit on what you can do,' he said. 'And someone in your field will figure it out, and they will move ahead dramatically.'

Engineering Challenges and AI Opportunities

The panel also mentioned some of the headwinds that innovators must contend with. On the other hand, Jeff Grover noted the near-term ability of systems to evolve. 'We're actually quite excited about this,' he said.

The Software End

Panelists discussed the relevance of software and the directions that coding is going in. 'Programming languages are built for people,' Sirak said. 'How do you actually change that to build languages and tools that AI can use?'

Choi mentioned benchmarks like inference rates of 2,900 tokens per second for Llama 4. 'Open source models are rich for developers,' she said. 'What that's doing is building a bridge between the bravest developers. I would say the early adopters tend to be very courageous, and they're willing to code on things that they've never seen before.'

The Fast Car

Several panelists reached for a particular metaphor, the Ferrari, with Choi referencing 'Ferrari-level' speeds for the Cerebras chip. Maheshwari talked about 'exotic' chips and design from an architecture paradigm, comparing certain builds to 'picking up groceries in a Ferrari.' He also mentioned the imperative of keeping the technology 'street legal.'

Moore's Law and Progress

Kepner talked about being surprised by what computers can do, and the size of investment in the industry. Moore's law, he said, implied an upper limit for spending. He predicted another decade of efficiencies, and cited the Ozaki scheme, a matrix method for preserving precision in calculations.

What About Quantum?

'I think that the first area where we're going to see quantum computing impact is going to be in research,' Keesling said. 'These problems, at their core, are (about) trying to answer what happens when atoms and electrons interact with one another and develop these emergent behaviors … how we think about chemistry, how we think about drug interactions, how we think about material properties, all comes from electrons and atoms moving.'

There was a lot to unpack in this panel discussion, including details on how we're going to achieve progress in the next few years.

The Ozaki Scheme

Going back to this matrix idea: I was not familiar with the term, so I looked it up and asked ChatGPT to describe it in basic English. 'It's named after Makoto Ozaki, the person who came up with the idea,' the model told me. 'He found a smart way to do very accurate math (like multiplying big grids of numbers) using fast but less accurate tools (like low-precision numbers). His method splits the work into small, simple steps and then carefully puts the pieces back together to get the exact right answer.'

Going further, ChatGPT, just to be nice, even gave me a medieval storyline to show how the Ozaki scheme works, and to contrast it with the alternatives. I'm just going to print that here, because it's interesting.

The Tale of the Kingdom of Matrixland

In the kingdom of Matrixland, the royal court has a big job: multiplying giant tables of numbers (called matrices). But the royal calculator is slow when it uses fancy, high-precision numbers. So the King holds a contest: 'Who can multiply big matrices both quickly and accurately?'

Sir Ozaki's Clever Trick

Sir Ozaki, a wise mathematician, enters the contest. He says: 'I'll break each matrix into small, easy pieces that the royal calculator can handle quickly. Then I'll multiply those simple parts and put them back together perfectly.' The crowd gasps! His method is fast and still gives the exact right answer. The King declares it the Ozaki Scheme.

The Other Contestants

But other knights have tricks too:

Lady Refina (Iterative Refinement): She does the quick math first, then checks her work. If it's off, she fixes it — again and again — until it's just right. She's very accurate, but takes more time.

Sir Compenso (Compensated Summation): He notices small errors that get dropped during math and catches them before they vanish. He's good at adding accurately, but can't handle full matrix multiplication like Ozaki.

Lady Mixie (Mixed Precision): She charges in with super speed, using tiny fast numbers (like FP8 or FP16). Her answers aren't perfect, but they're 'good enough' for training the kingdom's magical beasts (AI models).

Baron TensorFloat (TF32): He uses a special number format invented by the kingdom's engineers. Faster than full precision, but not as sharp as Ozaki. A favorite of the castle's GPU-powered wizard lab.

The Ending

Sir Ozaki's method is the most exact while still using fast tools. Others are faster or simpler, but not always perfect. The King declares: 'All of these knights are useful, depending on the task. But if you want both speed and the exact answer, follow Sir Ozaki's path!'

Anyway, you have a range of ideas here about quantum computing, information precision, and acceleration in the years to come. Let me know what you think about what all of these experts have said about the future of AI.
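The split-and-recombine idea ChatGPT describes can be sketched in a few lines of NumPy. This is a simplified illustration of the principle, not Ozaki's exact algorithm: it slices two float64 matrices into float16-representable pieces, multiplies the pieces with a wide accumulator (simulated here by doing the slice products in float64, the way tensor cores accumulate low-precision products in wider registers), and sums the partial products to recover most of the lost accuracy.

```python
import numpy as np

def split_fp16(A, num_slices=3):
    """Approximate a float64 matrix as a sum of float16-representable slices."""
    slices, residual = [], A.copy()
    for _ in range(num_slices):
        s = residual.astype(np.float16).astype(np.float64)  # round to fp16
        slices.append(s)
        residual = residual - s  # what fp16 missed, captured by the next slice
    return slices

def split_matmul(A, B, num_slices=3):
    """Ozaki-style product: multiply low-precision slices, accumulate widely."""
    C = np.zeros((A.shape[0], B.shape[1]))
    for Ai in split_fp16(A, num_slices):
        for Bj in split_fp16(B, num_slices):
            C += Ai @ Bj  # each slice product uses a wide (float64) accumulator
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

reference = A @ B                                    # full float64 product
naive = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float64)
split = split_matmul(A, B)

err_naive = np.abs(naive - reference).max()
err_split = np.abs(split - reference).max()
print(f"plain fp16 error: {err_naive:.2e}, split error: {err_split:.2e}")
```

The split result lands orders of magnitude closer to the float64 reference than the plain float16 product, which is the scheme's selling point: the fast, low-precision unit does all the multiplying, and careful bookkeeping restores the accuracy.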

I turned off all AI features on my Pixel phone — and instantly regretted it

Android Authority, an hour ago

I had this realization — an epiphany of sorts — that while we've become more conscious of generative AI tools like ChatGPT and Gemini, we often use AI much more than we actively perceive. Every app you touch on your phone has some kind of smarts and automation baked in, constantly learning from your patterns and improving in the background. That nudged me to experiment with becoming more intentional about these AI additions and disable them for a cleaner look and feel. No smart suggestions I mindlessly use, no Assistant to speak to, and no on-device smarts. All turned off.

I enthusiastically planned to do this for a week, but I soon realized I was being too optimistic. What sounded like a solid digital detox plan turned into a quiet reckoning: my phone is a well-oiled system with subtle automations I don't think I can live without anymore.

This is the most digitally impaired I've felt

I imagined turning off smart features across all my main apps would feel like going back to the good-old Nokia bar phone days. Nostalgia made that seem enticing — something I thought I'd actually want — but practically, it was far from rosy.

The most frustrated I got during my time off AI was with Gboard. Without swipe typing, predictive text, and autocorrect — the very features we all love to meme about — my entire phone felt broken. The number and variety of misspellings I could come up with made me question my self-worth as a writer. And fixing each one of them made me a painfully slow typist. Group chats would often move on from a topic by the time I'd finished typing my take — total Internet Explorer–style late blooming.

In Google Photos, edits became much more manual. While I enjoy playing with contrast and tone and whatnot myself, I really missed the one-tap fixes that helped with lighting and gave me a quick, clean version to share on Instagram or at least build on. More importantly, I couldn't use any of the smart editing features you get a Pixel for — Magic Editor, Photo Unblur, Best Take. Without them, it was like going back to the cave days of modern tech (2010, I mean).

Oh, and I had to completely disable Gemini/Google Assistant. I honestly felt like Joaquin Phoenix in Her, sorely missing his AI companion. I couldn't ask it to control smart home devices or help with Android Auto — everything became manual. I now had to type out my reminders, and changing music in the car turned into a dangerously distracting chore. That's when I noticed how often I absentmindedly said 'Ok Google' while walking around the house. I guess we've all been in the Her era all along without even realizing it.

Quality Inferiority of life

Beyond the big-ticket features I lost, I found myself stumbling without all the little ones, too. Without Pixel's Live Captions, I couldn't watch videos in noisy places and ended up saving them for later — not to consume more intentionally, but out of frustration. Gmail and Google Messages no longer suggested quick replies or helped finish my sentences. I had to type out full messages and emails like it was 2015.

Maps stopped telling me when to leave home based on traffic, and it didn't remember my parking spot either. Once, I forgot where I'd parked because I didn't save the location manually. Google Photos stopped resurfacing old memories during the day — no surprise moments with friends, family, or random mountain dogs I clicked a decade ago. Not getting to see dog photos randomly is the lowest kind of inferiority in life.

The good side of un-intelligification

Besides sparing me time to coin my own words, the lack of AI on my phone did help in a few ways. You must've already guessed the first one — battery life benefits. I couldn't track it rigorously since I had limited time with this setup, but the gains were in the 10–15% range, which was noticeably better than usual.

More importantly, the phone just felt quieter. No unnecessary alerts, no screen lighting up every half hour with nudges I didn't need. It felt more analog — like a tool I controlled, not something that subconsciously controlled me. I picked it up when I needed to, not because I was tempted to see what was waiting for me. But was it enough to keep me on this routine? You already know the answer to this, too.

I want all the AI magic back — right now

That was me last weekend, soon after I started the experiment. The lack of AI smarts was annoying at first, then it got frustrating enough to slow down my regular day. Simple things took twice the time, especially without Gboard's assistive typing. And that's when it hit me that AI isn't just Gemini or the ChatGPT app. It's ambient. It works in the background, often silently, making tiny decisions and smoothing over rough edges without drawing attention to itself. Quiet enough to fade into the background — until you turn it all off.

Hopefully, this little try-out gives you a good idea of why it's not worth trying for yourself. Convenience is the point of AI, and I'm all for it. Like I said, I lasted far fewer days than I'd planned. I remembered the exact sequence in which I turned everything off and flicked it all back on just as quickly. I want Photos to clean up distracting objects in my shots. I want the Assistant to find my playlist while I'm driving. And I absolutely cannot live without Gboard's smarts. So yes, I'm back to using my smartphone the way it was meant to be — smartly.

Rubin Observatory's Stunning Result Proves It's a ‘Game Changer' for Spotting Dangerous Asteroids

Gizmodo, an hour ago

Astronomers usually keep their eyes on the sky, but on Monday, June 23, the community turned its attention toward Washington, D.C., as scientists from the Vera C. Rubin Observatory unveiled the telescope's first images. Many have waited more than 20 years to see Rubin in action, and its initial findings did not disappoint. Rubin, a joint initiative of the National Science Foundation (NSF) and the Department of Energy's (DOE) Office of Science, recently conducted its first 10 hours of test observations. In just that short period, the observatory produced dazzling images and discovered more than 2,000 previously unknown asteroids, including seven near-Earth asteroids. None of them pose a threat to our planet, but through this wealth of new data, the observatory has already proved to be a game changer for asteroid hunters working on planetary defense. By conducting unprecedentedly fast and detailed surveys of the entire southern sky, Rubin will allow scientists to find and track more space rocks than ever before. 'As this camera system was being designed, we all knew it was going to be breathtaking in what it delivered, but this has exceeded all our expectations,' Richard Binzel, a professor of planetary sciences at the Massachusetts Institute of Technology (MIT) and inventor of the Torino Scale—a tool for categorizing potential Earth impact events—told Gizmodo. Data on those 2,000 new asteroids went directly to the International Astronomical Union's Minor Planet Center (MPC), the globally recognized organization responsible for cataloging and disseminating data on asteroids, comets, and other small celestial bodies. It plays an essential role in the early detection and monitoring of asteroids that threaten Earth. The MPC has spent years preparing for the deluge of data from Rubin, ramping up its software to process massive amounts of observations. 
When the first round officially came flooding in on Monday, it was 'nerve-racking and exciting simultaneously,' Matthew Payne, MPC director, told Gizmodo. This was just a taste of what's to come. In a few months, Rubin will begin the Legacy Survey of Space and Time (LSST), a decade-long, near-continuous survey of the southern sky. This will produce an ultrawide, ultra-high-definition time-lapse record of the universe. In terms of asteroids, that means the MPC will receive about 250 million observations per year from LSST, according to Payne. 'For us, that's a game changer in the total amount of data that we're getting, because at the moment we get somewhere in the region of 50 to 60 million a year,' he said. Rubin's remarkable abilities stem from its remarkable instruments. Equipped with a unique three-mirror telescope design and the largest digital camera ever built, this observatory can conduct all-sky surveys while still detecting very faint objects like asteroids. This bridges a key gap between existing technologies, Payne explained. When hunting space rocks, 'you need to go as deep as possible,' Peter Veres, an MPC astrophysicist, told Gizmodo. 'That's what the LSST does, and none of the survey telescopes in the world that aim at planetary defense do that.' During this 10-year survey, Rubin will observe the cosmos on an automated schedule using its 27.6-foot (8.4-meter) Simonyi Survey telescope. Each 30-second exposure will cover an area about 45 times the size of the full Moon. Then, the enormous LSST camera will capture wide-field images and stitch them together to create a complete view of the southern sky every three nights. The combination of Rubin's huge field of view, short exposure time, and its ability to rapidly sweep the sky will yield an avalanche of asteroid discoveries, Veres explained. 
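The cadence figures quoted above can be sanity-checked with back-of-the-envelope arithmetic. The numbers below are rough assumptions for illustration (a full Moon spanning about half a degree, an ~18,000-square-degree southern survey footprint, ~10 usable observing hours per night, and a guessed per-exposure overhead), not official Rubin specifications:

```python
import math

# Assumed figures for a rough estimate -- not official Rubin/LSST numbers.
moon_area = math.pi * (0.5 / 2) ** 2      # full Moon: ~0.5 deg across => ~0.2 sq deg
field_area = 45 * moon_area               # "45 times the full Moon" => ~8.8 sq deg
sky_area = 18_000                         # assumed survey footprint in sq deg

exposures_per_pass = sky_area / field_area  # pointings to tile the sky once

exposure_s = 30                           # exposure time quoted in the article
overhead_s = 7                            # assumed slew/readout time between fields
usable_night_s = 10 * 3600                # assumed ~10 observing hours per night
exposures_per_night = usable_night_s / (exposure_s + overhead_s)

nights_per_pass = exposures_per_pass / exposures_per_night
print(f"~{exposures_per_pass:.0f} pointings per pass, "
      f"~{nights_per_pass:.1f} nights to cover the sky once")
```

Roughly 2,000 pointings and about two nights per single pass comes out consistent with the article's "complete view of the southern sky every three nights," once revisits of each field and weather losses are folded in.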
In 2005, Congress ordered NASA to build a near-Earth object (NEO) survey program to detect, track, catalogue, and characterize all near-Earth asteroids and comets at least 328 feet (100 meters) in diameter. If one of these objects struck our planet, it would cause mass destruction that would decimate life on a continental scale, Payne said. The goal was to find 90% of them by 2020, but current estimates show NASA has only found about 40%, he explained. LSST could help NASA pick up the pace. 'It's just going to start revolutionizing our understanding of this population of things,' Payne said. Binzel agrees. 'Those objects are out there, whether we see them or not,' he said. 'Now we're going to see them, and we'll be able to determine that most—if not all of them—are going to safely pass by the Earth in the coming decades. But the best news is if an object has our name on it already, we will be able to find it most likely many, many years—if not decades—before it would come toward Earth.' In theory, that would give NASA's Planetary Defense Coordination Office (PDCO) time to launch a mission to intercept the asteroid. The PDCO is still developing this capability, but in 2022, it launched the Double Asteroid Redirection Test (DART) mission, which sent a spacecraft on a 10-month-long journey to collide with the asteroid moonlet Dimorphos. The collision successfully changed Dimorphos' orbital path, demonstrating NASA's ability to deflect a large asteroid away from Earth if given enough time. Given Rubin's clear potential to revolutionize planetary defense efforts—and the global attention it has received—one would expect NASA to be singing its praises. That has not been the case. The agency has kept strangely quiet about the observatory's launch—and in fact, it appears to be ignoring Rubin's first discoveries altogether.
'It's a warp drive version of finding asteroids,' Keith Cowing, an astrobiologist and former NASA employee who now serves as editor of NASA Watch, told Gizmodo. 'You'd think that the planetary defense people would be in the front row cheering it on, saying, 'send me the data!'' NASA did not share any public information about Monday's event and has not promoted the observatory's findings. When Gizmodo reached out for comment on Rubin's contributions to planetary science and defense, NASA declined and recommended reaching out to the observatory instead. On Tuesday, June 24, the agency's Office of the Inspector General published a report on the implementation and management of NASA's planetary defense strategy. The report only briefly mentions Rubin alongside NASA's forthcoming NEO Surveyor, a space telescope designed to find asteroids that could hit Earth. 'These new observatories are expected to find and track significantly more NEOs than current capabilities, which will likely mean a substantial increase in necessary follow-up observations,' the report states. NASA's PDCO and its planetary science program will undoubtedly use data gathered by the LSST, so what's with the cold shoulder? Cowing thinks it's a symptom of the agency's inner turmoil. 'They're jittery at NASA,' he said. 'Their budgets are being cut from all sides—they don't know what the final budget will be, but the White House wants to slash it—and they're having to react to this with whatever is at hand.' Indeed, President Donald Trump's 2026 budget proposal would cut NASA's science funding by a whopping 47%, potentially killing more than 40 missions, according to The Planetary Society. 'The only good news is what didn't get shot,' Cowing said. He suspects that most NASA employees—including planetary defense personnel—are in survival mode. 'What do you do when you simply don't know if you'll have a job, if the person next to you will have a job, or if you're gonna need to compete for the same job?' 
Cowing asked. 'That's what's at the heart of this. It's just this general malaise and fear, and people are simply not doing the routine, professional, collaborative, collegial work that they would do across agencies and countries.' As NASA science crumbles, it's unclear whether the agency will have the resources and personnel to take full advantage of Rubin's data. Though the PDCO currently leads the world's planetary defense efforts, that could soon change. Binzel, however, is optimistic. 'Great nations do great science,' he said. 'I continue to have faith that our nation will continue to do great science.'
