
In the age of AI, content is everywhere — but are we still telling stories that matter?
Last week on Substack, AI marketing expert Charlie Hills penned a sharp, clear-eyed provocation: 'Content is Dead. Long Live Connection'. It caught me off guard and held me there.
Not because content is dead (it's not – it's alive, omnipresent, flooding our screens in formats we couldn't have imagined five years ago), but because Charlie is right to point us towards the deeper issue: connection.
Content might be multiplying, but is it connecting? Is it resonating in the way great stories once did – not just engaging but anchoring us?
That question has never been more urgent. We're living through a supercharged shift – a reformatting of reality – as artificial intelligence enters its next act. The race to create has become a sprint. With just a few prompts, almost anyone can make anything. Art. Music. Dialogue. Essays. Sales decks. Songs. Films. It's dazzling.
And yet – it's also flattening.
Because what we're seeing now isn't just a technological leap. It's a philosophical one. One where the difference between originality and replication, between human thought and predictive patterning, is collapsing in plain sight.
The race to the bottom of the brainstem 2.0
Back in 2017, Tristan Harris – former Google design ethicist turned activist and leading voice of the Netflix documentary The Social Dilemma – warned of a phenomenon he called 'the race to the bottom of the brainstem'.
Platforms weren't just competing for time or clicks, he argued. They were competing for the most primitive parts of our brain: our instincts, our fears, our compulsions. Whatever could trigger outrage or anxiety – that's what won the attention war.
The consequences are now well known: fractured focus, polarisation, social fatigue, and perhaps most disturbingly, a generation trained to skim, not think.
But what's unfolding now feels like a spiritual sequel to that. A new race. This time, not towards the base of the brain, but towards the end of originality.
Because AI doesn't think. It predicts.
It doesn't dream or deliberate or dissent. It calculates likelihoods – what's most probable, based on patterns of the past. It does this with extraordinary sophistication, but its engine is built not on insight, but on inference.
And so, just as Harris warned us about platforms hijacking attention, today we face the subtle creep of something just as worrying: the erosion of human uniqueness through the mass rehashing of content.
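If 'calculating likelihoods' sounds abstract, here is a minimal sketch of what it means in practice. It assumes the freely available GPT-2 model and the Hugging Face transformers library, chosen purely for illustration – not anything referenced in this piece – and simply asks the model to rank the words most likely to come next.

```python
# A toy illustration (not any particular product's workflow): ask a small open
# model which words are most likely to come next. The model isn't reasoning
# about the sentence; it is ranking continuations by probability, learned from
# patterns in past text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Content is dead. Long live"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # one score per vocabulary token, per position
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most probable next tokens: inference over the past, not insight.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```

Nothing in that loop deliberates or dissents; it ranks what usually comes next.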
Is content dead? No, but its intent has been warped
To be clear: content isn't dead. It's thriving in volume. We have never had so much access to ideas, essays, videos, podcasts and posts. But the question is no longer how much, but how meaningful.
Because when content is generated by tools designed to mimic – not originate – we have to confront a brutal truth: intention matters.
In the past, 'content' was less polished, but it had soul. We sought stories – in church, around the fire, at the pub – to make sense of life. Now it feels like we are simply wading through a river that has broken its banks.
Today, we scroll. We skim. And increasingly, we wonder: who wrote this? Did anyone?
The horse has bolted. What now?
Perhaps the horse has indeed bolted. The tools are here. Anyone can now produce passable content – indistinguishable at a glance from the real thing. But 'passable' is not the same as 'powerful'.
And this is where our opportunity lies.
It is possible – and urgent – to use AI not to replace creativity, but to reclaim time. Let the machines handle the mundane. Let them assist, support, scaffold. And then use the time you get back not to produce more noise, but to reconnect with the very essence of being human. I know it's not easy to stop tapping when someone hands you a magic wand.
But time is a wonderful thing. Time to think, to read, or to have a long-overdue lunch with friends.
The greatest threat AI poses is not to employment but to enchantment – to the spontaneity and serendipity that define art, love, humour and originality. And we must recognise that, as time goes on, our kids will need mentorship in connection and relationships.
LinkedIn's demise
Nowhere is this shift more visible than on LinkedIn.
What once felt like a platform for raw professional reflection – real people, real ideas – is slowly becoming an uncanny valley of templated inspiration and machine-stitched leadership.
You can sense the AI-ness. The bland polish. The synthetic sincerity. It sounds right, but it feels wrong.
The solution isn't to quit. It's to reclaim tone. To sound like yourself. To say things no one else could say. To use the tools for acceleration, but to do the speaking yourself.
The real winners in this AI era will not be those who extract the purest margins or fastest output. They'll be those who master the balance: AI and humanity. Efficiency and empathy. Data and depth.
They'll be the ones who deploy the extra time AI gives them to become more human – not less.
That could mean more family time. Or mentoring. Or writing something messy and bold. Or just watching a film without checking your phone. Whatever it is, it's the redeployment of time towards meaning that will mark the new creative class.
Final thoughts: A call to be seen
This isn't a Luddite's lament. This is a call to awareness.
We're at a cultural fork in the road. AI is here, and it's extraordinary. But it is not us. It can assist, but it cannot replace. And if we let it mimic the soul out of our storytelling, we will look back and realise we lost something irreplaceable – the texture of being alive.
So no, content isn't dead. It's dynamic and alive, and its production values are better than ever. But will anyone care if they think it's fake?
We will be digging harder than ever for connection. That's the new gold in marketing over the next few years. Much harder to mine, but more valuable than ever.
My business partner and CEO, Mike Butler, doesn't lean on AI much at all. He doesn't need to. He is a master at caring about people's outcomes, asking questions, probing for strategic insight and value, and nurturing relationships. Ironically, he unlocks the most value in our AI business. Our entire team would agree. I would have my kids use AI well, but build on Mike's foundation for how to relate to people.
Let's not settle for attention. Let's fight for connection – which will mean adapting, reading between the lines of what we see, and dialling up human insight, relationships and humour. DM
