
Learn More by Listening: Why Audioread's Convenience Outweighs Traditional Learning Barriers
In today's fast-paced world, traditional learning methods often fall short of meeting the demands of busy professionals and multitaskers. Enter Audioread, a cutting-edge text-to-speech platform that transforms written content into audio, enabling users to absorb knowledge anywhere, anytime. By leveraging dead time—like commuting, exercising, or doing chores—Audioread helps users consume up to 10x more content than traditional learning methods.
Breaking Down the Numbers: Volume > Per-Minute Efficiency
The age-old debate about reading versus listening often centers on retention rates: people retain roughly 10% of what they hear compared to 20% of what they read. But this narrow focus misses the bigger picture.
The Math of Learning Efficiency
• Reading: Spend 30 minutes daily reading and retain 20%, gaining 6 minutes of retained knowledge.
• Listening with Audioread: Spend 3 hours daily multitasking while listening and retain 10%, gaining 18 minutes of retained knowledge, three times more (worked through in the sketch below).
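To make the trade-off concrete, here is the same arithmetic as a minimal, runnable sketch. The retention rates and daily time budgets are the illustrative figures quoted above, not measured data:

```python
# Back-of-the-envelope comparison of retained learning time per day.
def retained_minutes(minutes_spent: float, retention_rate: float) -> float:
    """Minutes of content effectively retained."""
    return minutes_spent * retention_rate

reading = retained_minutes(30, 0.20)     # 30 focused reading minutes at 20% retention
listening = retained_minutes(180, 0.10)  # 3 hours of multitasked listening at 10% retention

print(f"Reading:   {reading:.0f} retained min/day")     # 6
print(f"Listening: {listening:.0f} retained min/day")   # 18
print(f"Volume advantage: {listening / reading:.1f}x")  # 3.0x
```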
This 'volume advantage' is why high performers increasingly prioritize audio learning. A 2024 study in Educational Technology found that adults who listened to audiobooks while multitasking absorbed 2.5x more content monthly than dedicated readers, despite slightly lower per-session retention.
'The real productivity unlock isn't about squeezing more out of every minute you read—it's about turning all the minutes you're not reading into learning time. That's what Audioread makes possible.' - Ryan Walter, CEO of Audioread
Why Listening Works: The Neuroscience of Convenience
Critics often argue that listening is passive, but modern research tells a different story:
• Dual-coding theory: Audio stimulates auditory processing while freeing the brain to visualize concepts, enhancing creativity.
• Emotional engagement: Narrators' tone and pacing boost empathy and narrative recall, making complex ideas stickier.
• Repetition without friction: Relistening to an Audioread file during a walk is easier than rereading a dense PDF.
A 2025 University of Waterloo study found that audio learners recalled 27% more details from non-fiction than readers, attributing the effect to the brain anchoring new information to real-world contexts (e.g., 'I learned this while cooking').
Audioread's Game-Changing Features
While other platforms offer text-to-speech options, Audioread stands out with three key features designed for seamless learning:
1. Effortless Content Integration
• Subscribe to RSS feeds to auto-convert newsletters and blogs into audio.
• Forward PDFs, reports, or emails to your Audioread address for instant playlist access (a scripted example follows this list).
• Drag-and-drop textbooks or scanned documents for quick conversion.
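The email-forwarding option above lends itself to automation. Here is a minimal sketch using only Python's standard library; the forwarding address, SMTP host, and credentials are placeholders rather than real Audioread endpoints (your account would supply the actual forwarding address):

```python
# Sketch: email a PDF to a (placeholder) Audioread forwarding address.
import smtplib
from email.message import EmailMessage
from pathlib import Path

AUDIOREAD_ADDRESS = "your-id@audioread.example"  # placeholder, not a real address

msg = EmailMessage()
msg["From"] = "you@example.com"
msg["To"] = AUDIOREAD_ADDRESS
msg["Subject"] = "Quarterly industry report"

pdf = Path("report.pdf")
msg.add_attachment(pdf.read_bytes(), maintype="application",
                   subtype="pdf", filename=pdf.name)

with smtplib.SMTP_SSL("smtp.example.com") as server:  # your mail provider's host
    server.login("you@example.com", "app-password")   # use an app-specific password
    server.send_message(msg)
```

Wired into a scheduled job, a script like this could push every report that lands in a watched folder straight into the day's listening queue.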
2. Centralized Knowledge Hub
• Sync Audioread with Spotify, Apple Podcasts, Overcast, and other podcast players (see the feed sketch after this list).
• Create playlists combining morning news podcasts, work reports, and evening novels—all sorted by priority or topic.
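Syncing with arbitrary podcast players most likely works the way such integrations generally do: the queue is exposed as a private RSS feed that any feed-aware client can subscribe to. That mechanism is an assumption here, and the feed URL below is a placeholder, but the sketch shows how little glue is involved:

```python
# Sketch: read a (placeholder) private Audioread feed like a podcast app would.
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://audioread.example/feeds/your-private-feed.xml"  # placeholder

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Audioread queue"))
for entry in feed.entries[:5]:
    enclosures = entry.get("enclosures", [])     # the attached audio file(s)
    audio_url = enclosures[0].href if enclosures else None
    print(f"- {entry.title}: {audio_url}")
```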
3. Retention-Boosting Tools
• Adjustable playback speed for technical or familiar content.
• AI summaries for quick refreshers before diving back into an article.
• Offline access for uninterrupted learning during flights or remote work.
Real-Life Wins: How Audioread Users Outlearn Readers
Case Study 1 – The Busy Executive
Challenge: A two-hour daily commute leaves no time for industry reports.
Solution: Convert 50-page PDFs into audio files; listen at 1.5x speed during drives.
Result: Completes 15 reports/month compared to just three when reading.
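A quick plausibility check on those numbers, with assumed values for page density and narration pace (neither figure comes from Audioread):

```python
# Sketch: does 15 reports/month fit a 2-hour daily commute?
words_per_page = 300   # dense report page (assumption)
narration_wpm = 150    # typical text-to-speech pace at 1x (assumption)
playback_speed = 1.5
pages_per_report = 50

minutes_per_report = pages_per_report * words_per_page / (narration_wpm * playback_speed)
commute_minutes = 120 * 22  # 2 h/day over ~22 working days

print(f"{minutes_per_report:.0f} min per report")                             # ~67
print(f"{commute_minutes / minutes_per_report:.0f} reports could fit/month")  # ~40
```

At roughly 67 minutes per report, 15 reports consume about 1,000 of some 2,640 monthly commute minutes, so the executive's result leaves plenty of slack.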
Case Study 2 – The Lifelong Learner
Challenge: Struggles to sit still but wants to read 100 books/year.
Solution: Listen to three books/week while gardening or cleaning.
Result: Finishes 140 books/year by leveraging repetition and multitasking.
Debunking the '10% Myth'
The infamous 'Learning Pyramid' (5% lecture retention, 10% reading) is based on outdated theories from the 1940s — not modern neuroscience. Contemporary studies show:
• No significant retention difference between reading and listening for non-fiction content.
• Audio learners often outperform readers in applied scenarios (e.g., discussing concepts).
As one Audioread user said, 'I've listened to over 300 business books since starting Audioread. Even at 10% retention, that's like absorbing the key ideas from 30 books—far more than I could ever read.'
Ready to transform your downtime into productive learning?
Visit Audioread.com today and start your free trial—your podcast app will thank you.
About Audioread
Audioread is an innovative text-to-speech platform designed to help users maximize their learning potential by converting written content into audio files that can be played in Audioread's web app or synced with any podcast player. Whether it's newsletters, PDFs, e-books, or articles, Audioread makes knowledge accessible anytime, anywhere, empowering users to learn more while doing less.
Media Contact:
Company Name: Audioread
Contact Person: Ryan Walter, CEO
Email: [email protected]
Phone: +1 (951) 666-3443
Country: United States
Website: https://audioread.com
Source: PR Gun
