
Too Good To Be Human? AI's Surprising Bias Against Quality Writing
The first Turing Test may have been conducted at the ball in My Fair Lady. Professor Higgins has wagered with his friend Pickering that he can transform a flower girl into a lady through the science of language. He knows that people judge others by their manner of speech, and he'll use his skills as a professor of elocution to pull off the ruse.
The final test comes when a rival professor conducts his own appraisal of Eliza on the dance floor. His verdict: 'She is a fraud!' His logic is captured in the song, 'You Did It!'
Artificial Intelligence has faced a similar 'fool the inspector' challenge since Alan Turing first posed his famous test in a 1950 paper titled 'Computing Machinery and Intelligence.' Turing's very practical test proposes that a computer is intelligent if a person cannot distinguish between the computer and another person during a text-based chat.
Many experts believe we've passed Turing's test with generative AI models. The latest version of Claude (Claude 3.7 Sonnet) was just released, and it writes remarkably well. I provided Claude with an outline for an article, including the key points to stress, along with the text of an interview, and it wrote a clear, interesting, coherent article. It was (almost) indistinguishable from something that I might have written.
I decided to try a reverse Turing test. My question was whether other AIs thought a given article was written by a person or by an AI. Gemini was certain the article I gave it was written by an AI. ChatGPT thought it plausible that the article was written by either a human or a machine (or a combination of both). Claude credited the human.
Intrigued, I put six of my Forbes columns through the test by asking, 'Was this written by an AI?' The articles were, of course, written by a human (me). In five out of six cases, Gemini thought they were AI-written. The model was transparent about its logic and about the 'tells' it uses to identify AI-written text. Several of these fit the category of what might be called good writing: structured argumentation; use of data and statistics; referencing sources; focus on practical solutions; and a concluding call to action. Ironically, these are the aspirations of many an essay writer! In some cases, unfortunately, Gemini also found that the writing 'lacks a distinct personality or voice…which is often characteristic of AI-generated text.' Oh, well.
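For readers who want to run the same reverse test themselves, here is a minimal sketch that poses the question to the three models through their APIs rather than their chat interfaces. It is an illustration only, not the method used for this article: the model identifiers, the exact prompt wording, and the file name column.txt are assumptions on my part.

import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

# Hypothetical prompt wording; the experiment above used the chat interfaces.
PROMPT = "Was this written by an AI? Explain your reasoning.\n\n---\n\n{article}"


def ask_claude(article: str) -> str:
    # Anthropic Messages API; the model id is an assumption.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-7-sonnet-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(article=article)}],
    )
    return msg.content[0].text


def ask_chatgpt(article: str) -> str:
    # OpenAI Chat Completions API; the model id is an assumption.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(article=article)}],
    )
    return resp.choices[0].message.content


def ask_gemini(article: str) -> str:
    # Google Generative AI SDK; the model id is an assumption.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")
    return model.generate_content(PROMPT.format(article=article)).text


if __name__ == "__main__":
    with open("column.txt", encoding="utf-8") as f:  # one column per run
        article = f.read()
    for name, ask in [("Gemini", ask_gemini), ("ChatGPT", ask_chatgpt), ("Claude", ask_claude)]:
        print(f"--- {name} ---")
        print(ask(article))

Because each model answers in free text, comparing verdicts across models (and across repeated runs) is part of the exercise; the judgments below came from reading the responses, not from any automated scoring.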
Gemini's summary for the article 'How to Jump Start Learning At Work' was:
It reminded me of the song from My Fair Lady: 'This writing is too good, it said. That clearly indicates that it is AI…'
ChatGPT seemed confused. It considered three authorship possibilities for each article: Purely AI-Generated; Human + AI Collaboration; and Purely Human-Written. In most cases, it favored a human collaborating with an AI, but it hedged by finding that all three options were plausible in five out of six cases.
Claude identified half of the articles as 'indeterminate' and half as human-written (phew!). It based this on the presence of a personal voice, the individual experience cited in the article, and the nuance of the argument (as perceived by the first so-called third-generation LLM). Its summary for the article cited above was:
A few observations:
1. AI generally assumes that well-written articles are written by an AI. In other words, AI has a low regard for human-written text!
2. AI-written text is, indeed, getting very good. We should use it where we can to make writing better – but without delegating the thinking. AI will increase both the efficiency and clarity of business communications.
3. There will be an art to the collaboration between AIs and people as they work together to create good writing. The partnership is likely to involve iteration and the use of several tools. The best way of learning to do this will be by doing.
The ability of AI to write well creates another challenge. Content that sounds good but is entirely derivative will become very easy to create (and to promote using AI), and it will be easy to become even more overwhelmed by marginally useful information.
For centuries, we lived in a curated media world, where content was scarce and editors were in control. That world was disrupted in less than a generation by user-generated content like blogs, podcasts, and YouTube, which began to overwhelm our ability to process it all. AI will move us into another era, one in which the volume of user-generated content increases so dramatically that it inevitably alienates readers.
What will be the consequence? I think that people's media preferences will revert from open, public content to curated, paid content. A model originally born of scarcity is likely to return as a consequence of abundance (or, more precisely, of a scarcity of attention).