Will AI become God? That's the wrong question.

Vox | 07-04-2025
It's hard to know what to think about AI.
It's easy to imagine a future in which chatbots and research assistants make almost everything we do faster and smarter. It's equally easy to imagine a world in which those same tools take our jobs and upend society. Which is why, depending on who you ask, AI is either going to save the world or destroy it.
What are we to make of that uncertainty?
Jaron Lanier is a digital philosopher and the author of several bestselling books on technology. Among the many voices in this space, Lanier stands out. He's been writing about AI for decades and he's argued, somewhat controversially, that the way we talk about AI is both wrong and intentionally misleading.
Jaron Lanier at the Music + Health Summit in 2023, in West Hollywood, California. Michael Buckner/Billboard via Getty Images
I invited him onto The Gray Area for a series on AI because he's uniquely positioned to speak both to the technological side of AI and to the human side. Lanier is a computer scientist who loves technology. But at his core, he's a humanist who's always thinking about what technologies are doing to us and how our understanding of these tools will inevitably determine how they're used.
We talk about the questions we ought to be asking about AI at this moment, why we need a new business model for the internet, and how descriptive language can change how we think about these technologies — especially when that language treats AI as some kind of god-like entity.
As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.
This interview has been edited for length and clarity.
What do you mean when you say that the whole technical field of AI is 'defined by an almost metaphysical assertion'?
The metaphysical assertion is that we are creating intelligence. Well, what is intelligence? Something human. The whole field was founded by Alan Turing's thought experiment called the Turing test, where if you can fool a human into thinking you've made a human, then you might as well have made a human, because what other test could there be? Which is fair enough. On the other hand, what other scientific field — other than maybe supporting stage magicians — is entirely based on being able to fool people? I mean, it's stupid. Fooling people in itself accomplishes nothing. There's no productivity, there's no insight, unless you're studying the cognition of being fooled, of course.
There's an alternative way to think about what we do with what we call AI, which is that there's no new entity, there's nothing intelligent there. What there is, is a new and, in my opinion, sometimes quite useful form of collaboration between people.
What's the harm if we do think of it that way?
That's a fair question. Who cares if somebody wants to think of it as a new type of person or even a new type of God or whatever? What's wrong with that? Potentially nothing. People believe all kinds of things all the time.
But in the case of our technology, let me put it this way, if you are a mathematician or a scientist, you can do what you do in a kind of an abstract way. You can say, 'I'm furthering math. And in a way that'll be true even if nobody else ever even perceives that I've done it. I've written down this proof.' But that's not true for technologists. Technologists only make sense if there's a designated beneficiary. You have to make technology for someone, and as soon as you say the technology itself is a new someone, you stop making sense as a technologist.
If we make the mistake, which is now common, of insisting that AI is in fact some kind of god or creature or entity or oracle, rather than a tool, as you define it, the implication is that it would be a very consequential mistake, right?
That's right. When you treat the technology as its own beneficiary, you miss a lot of opportunities to make it better. I see this in AI all the time. I see people saying, 'Well, if we did this, it would pass the Turing test better, and if we did that, it would seem more like it was an independent mind.'
But those are all goals that are different from it being economically useful. They're different from it being useful to any particular user. They're just these weird, almost religious, ritual goals. So every time you're devoting yourself to that, it means you're not devoting yourself to making it better.
One example is that we've deliberately designed large-model AI to obscure the original human sources of the data that the AI is trained on to help create this illusion of the new entity. But when we do that, we make it harder to do quality control. We make it harder to do authentication and to detect malicious uses of the model because we can't tell what the intent is, what data it's drawing upon. We're sort of willfully making ourselves blind in a way that we probably don't really need to.
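To make that provenance point concrete, here is a minimal sketch, not from the interview and entirely hypothetical, of what keeping human sources attached to training data could look like. The TrainingExample record and flag_sources filter are invented names; the point is only that quality control and authentication become possible when provenance is retained, and impossible when it is stripped.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingExample:
    """A training example that keeps its human provenance attached."""
    text: str
    source_id: str    # identifier for the person or publication it came from
    license: str      # terms under which the data may be used
    collected_at: str  # date the example was gathered

def flag_sources(corpus: list[TrainingExample], suspect_ids: set[str]) -> list[TrainingExample]:
    """Return examples traceable to sources flagged as malicious or low quality.

    If provenance is stripped before training, a filter like this becomes
    impossible: there is nothing left to trace an output back to.
    """
    return [ex for ex in corpus if ex.source_id in suspect_ids]

corpus = [
    TrainingExample("How to bake sourdough...", "user:alice", "CC-BY", "2024-05-01"),
    TrainingExample("Miracle cure, act now!", "user:spam-farm-17", "unknown", "2024-06-12"),
]
print(flag_sources(corpus, {"user:spam-farm-17"}))
```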
I really want to emphasize, from a metaphysical point of view, I can't prove, and neither can anyone else, that a computer is alive or not, or conscious or not, or whatever. All that stuff is always going to be a matter of faith. That's just the way it is. But what I can say is that this emphasis on trying to make the models seem like they're freestanding new entities does blind us to some ways we could make them better.
So does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?
What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people, that the AI future would be a better one, and that we're just a disposable, temporary container for the birth of AI. I hear that opinion quite a lot.
Wait, that's a real opinion held by real people?
Many, many people. Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a 'bio baby' because as soon as you have a 'bio baby,' you get the 'mind virus' of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it's much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.
Now, in this particular case, this was a young man with a female partner who wanted a kid. And what I'm thinking is this is just another variation of the very, very old story of young men attempting to put off the baby thing with their sexual partner as long as possible. So in a way I think it's not anything new and it's just the old thing. But it's a very common attitude, not the dominant one.
I would say the dominant one is that the super AI will turn into this God thing that'll save us and will either upload us to be immortal or solve all our problems and create superabundance at the very least. I have to say there's a bit of an inverse proportion here between the people who directly work in making AI systems and then the people who are adjacent to them who have these various beliefs. My own opinion is that the people who are able to be skeptical and a little bored and dismissive of the technology they're working on tend to improve it more than the people who worship it too much. I've seen that a lot in a lot of different things, not just computer science.
One thing I worry about is AI accelerating a trend that digital tech in general — and social media in particular — has already started, which is to pull us away from the physical world and encourage us to constantly perform versions of ourselves in the virtual world. And because of how it's designed, it has this habit of reducing other people to crude avatars, which is why it's so easy to be cruel and vicious online and why people who are on social media too much start to become mutually unintelligible to each other. Do you worry about AI supercharging this stuff? Am I right to be thinking of AI as a potential accelerant of these trends?
It's arguable, and actually consistent with the way the [AI] community speaks internally, to say that the algorithms that have been driving social media up to now are a form of AI, if that's the term you wish to use. And what the algorithms do is attempt to predict human behavior based on the stimulus given to the human. By putting that in an adaptive loop, they hope to drive attention and an obsessive attachment to a platform. The trouble is that these algorithms can't tell whether something's being driven by things that we might think are positive or by things that we might think are negative.
I call this the life of the parity, this notion that whether a bit is a one or a zero doesn't matter, because it's an arbitrary designation in a digital system. So if somebody's getting attention by being a dick, that works just as well as if they're offering lifesaving information or helping people improve themselves. But then the peaks that are good are really good, and I don't want to deny that. I love dance culture on TikTok. Science bloggers on YouTube have achieved a level that's astonishingly good, and so on. There are all these really, really positive spots. But overall, there's this loss of truth, political paranoia, and unnecessary confrontation between arbitrarily created cultural groups, and that's really doing damage.
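As a purely illustrative sketch of the adaptive loop described above, here is a toy engagement optimizer, a simple epsilon-greedy bandit with made-up items and numbers. Nothing in the update rule knows whether attention comes from outrage or from delight; it only learns which item captures more of it.

```python
import random

# Toy model of the adaptive loop: each item has a learned estimate of how much
# attention it captures. The loop shows the item with the highest estimate,
# observes whether the user engages, and updates the estimate. Nothing in it
# distinguishes delight from outrage; both just read as "more engagement."
estimates = {"calm explainer": 0.5, "outrage bait": 0.5, "dance clip": 0.5}
true_pull = {"calm explainer": 0.4, "outrage bait": 0.9, "dance clip": 0.7}  # invented numbers

LEARNING_RATE = 0.1
random.seed(0)

for _ in range(200):
    # Mostly exploit the current best guess, occasionally explore at random.
    if random.random() < 0.1:
        item = random.choice(list(estimates))
    else:
        item = max(estimates, key=estimates.get)
    engaged = 1.0 if random.random() < true_pull[item] else 0.0
    estimates[item] += LEARNING_RATE * (engaged - estimates[item])

print(estimates)  # the most attention-grabbing item wins, whatever its valence
```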
So yeah, could better AI algorithms make that worse? Plausibly. It's possible that it's already bottomed out and if the algorithms themselves get more sophisticated, it won't really push it that much further.
But I actually think it can, and I'm worried about it, because we so much want to pass the Turing test and make people think our programs are people. We're moving to this so-called agentic era where it's not just that you have a chat interface with the thing, but the chat interface gets to know you over years at a time and gets a so-called personality, and all this. And then the idea is that people fall in love with these. We're already seeing examples of this here and there, and the prospect of a whole generation of young people falling in love with fake avatars worries me. I mean, people talk about AI as if it's just like this yeast in the air. It's like, oh, AI will appear and people will fall in love with AI avatars, but it's not like that. AI is always run by companies, so they're going to be falling in love with something from Google or Meta or whatever.
The advertising model was sort of the original sin of the internet in lots of ways. I'm wondering how we avoid repeating those mistakes with AI. How do we get it right this time? What's a better model?
This question is the central question of our time in my view. The central question of our time isn't, how are we able to scale AI more? That's an important question and I get that. And most people are focused on that. And dealing with the climate is an important question. But in terms of our own survival, coming up with a business model for civilization that isn't self-destructive is, in a way, our most primary problem and challenge right now.
Because of the way we're doing it. We went through this thing in the earlier phase of the internet of 'information should be free,' and then the only business model that's left is paying for influence. And so all of the platforms look free or very cheap to the user, but the real customer is actually whoever is trying to influence the user. And you end up with what's essentially a stealthy form of manipulation being the central project of civilization.
We can only get away with that for so long. At some point, that bites us and we become too crazy to survive. So we must change the business model of civilization. How to get from here to there is a bit of a mystery, but I continue to work on it. I think we should incentivize people to put great data into the AI programs of the future. And I'd like people to be paid for data used by AI models and also to be celebrated and made visible and known. I think it's just a big collaboration and our collaborators should be valued.
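There is no settled implementation of this "data dignity" idea, and attributing a model's output back to the people whose data shaped it is one of the open technical questions Lanier mentions in the next answer. As a hypothetical sketch, assuming some attribution method has already assigned weights to contributors for a given output, splitting a payment among them is simple arithmetic:

```python
def split_payment(total_cents: int, attribution: dict[str, float]) -> dict[str, int]:
    """Split a payment across contributors in proportion to attribution weights.

    `attribution` maps contributor IDs to nonnegative weights produced by some
    (here unspecified) attribution method. Computing those weights well is the
    hard, open problem; dividing the money is the easy part.
    """
    total_weight = sum(attribution.values())
    if total_weight == 0:
        return {who: 0 for who in attribution}
    return {who: round(total_cents * weight / total_weight)
            for who, weight in attribution.items()}

# Hypothetical example: a generated passage traced 60/30/10 to three contributors.
print(split_payment(500, {"alice": 0.6, "bob": 0.3, "carol": 0.1}))
# {'alice': 300, 'bob': 150, 'carol': 50}
```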
How easy would it be to do that? Do you think we can or will?
There's still some unsolved technical questions about how to do it. I'm very actively working on those and I believe it's doable. There's a whole research community devoted to exactly that distributed around the world. And I think it'll make better models. Better data makes better models, and there's a lot of people who dispute that and they say, 'No, it's just better algorithms. We already have enough data for the rest of all time.' But I disagree with that.
I don't think we're the smartest people who will ever live, and there might be new creative things that happen in the future that we don't foresee, and the models we've currently built might not extend into those things. Having some open system where people can contribute to new models in new ways is a more expansive and just kind of a spiritually optimistic way of thinking about the deep future.
Is there a fear of yours, something you think we could get terribly wrong, that's not currently something we hear much about?
God, I don't even know where to start. One of the things I worry about is we're gradually moving education into an AI model, and the motivations for that are often very good because in a lot of places on earth, it's just been impossible to come up with an economics of supporting and training enough human teachers. And a lot of cultural issues in changing societies make it very, very hard to make schools that work and so on. There's a lot of issues, and in theory, a self-adapting AI tutor could solve a lot of problems at a low cost.
But then the issue with that is, once again, creativity. How do you train people who learn in a system like that so that they're able to step outside of what the system was trained on? There's this funny way that you're always retreading and recombining the training data in any AI system, and you can address that to a degree with constant fresh input and this and that. But I am a little worried about people being trained in a closed system that makes them a little less than they might otherwise have been, and gives them a little less faith in themselves.