
Everyone's Getting Better At Using AI: Thoughts On Vibe Coding
As we look at the landscape around the second quarter of 2025, we see amazing developments.
You can look at what's happening at conferences and trade shows. You can ask engineers what they're doing, or consult with a CEO.
Everywhere you look, things are changing at breakneck speed.
What's the difference between people who actually use the keyboard to write code, and others who manage people and processes at an abstract level?
Well, in the AI age, that gap is getting smaller quickly.
But there's still an emphasis on people who know how to code, and especially, people who know how to engineer. Coding is getting automated, but engineering is still a creative component of the human domain - for now.
I was listening to a recent episode of AI Daily Brief, where Nathaniel Whittemore talked to Shawn Wang, professionally known as 'Swyx,' about valuing the engineering role.
'It has always been valuable for people who are involved to keep the pulse on what builders are building,' Swyx said.
The two conceded, though, that right now, 'building' is becoming a vague term, as it's getting easier to develop a project on a new codebase. You just tell AI what you want, and it builds it.
Having said that, in putting together events for the engineering community, Swyx sees the effort as vital to the industry itself.
'The people who have hands on a keyboard also need a place to gather,' he said, noting that for some of these events, attendees have to publish or otherwise prove their engineering capabilities.
Later on, in thinking about how this works logistically, the two talked about the Model Context Protocol (MCP), an open protocol whose specification lives on GitHub, and how it's being used.
MCP connects LLMs to the context that they need.
The protocol involves prebuilt integrations, a client-server architecture, and APIs, as well as host environments like Claude Desktop.
The 'hosts' are LLM applications, each 'client' maintains a 1:1 connection to a server, and servers supply context, data, and prompts. A transport layer carries communication events, including requests, results, and errors.
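To make that architecture concrete, here is a minimal sketch of an MCP server, assuming the official Python SDK's FastMCP interface; the weather tool and config resource are invented here purely for illustration.

```python
# A minimal MCP server sketch, assuming the official Python SDK
# ("pip install mcp") and its FastMCP interface. The tool and
# resource below are hypothetical examples, not real integrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")  # the name a host sees for this server

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast; a real server would call a weather API."""
    return f"Forecast for {city}: sunny, 22 C"

@mcp.resource("config://app")
def app_config() -> str:
    """Expose static context that the host's LLM can read."""
    return "demo configuration"

if __name__ == "__main__":
    # stdio transport: the host launches this process and exchanges
    # JSON-RPC requests, results, and errors over stdin/stdout.
    mcp.run(transport="stdio")
```

Any MCP-capable host, Claude Desktop among them, can launch a server like this and route its model's requests through it.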
'You're not stuck to one model,' Swyx pointed out in illustrating how versatile these setups can be.
Noting an 'S curve' for related technology, Swyx discussed the timing of innovations, invoking Moore's law.
'If you're correct, but early, you're still wrong,' he said, mentioning how companies are 'moving away from a cost plus model to one where you deliver outcomes.'
Paraphrasing Shakespeare, he suggested that at companies like Google, execs are asking: 'To MCP, or not to MCP?'
And there's another question for implementers:
'How much of my job can you do?'
As for a timeline for MCP, Swyx cited the work of Anthropic's Alex Albert.
'The immediate reaction was good,' he said. 'There was a lot of immediate interest. I don't think there was a lot of immediate follow through.'
Later on, Swyx brought up the contributions of Lilian Weng, who he said defined an AI agent as 'LLM + memory + planning + tool use.'
He also laid out his own definition, based on the acronym IMPACT, noting that he sees a lot of this work as disordered or unstructured, and that people should ideally be able to define agent engineering well.
The 'I', he said, stands for intent and intensity, goals, and evaluations.
'M' is memory; 'P' is planning.
'A' is authority.
'Think of (the agent) as like a real estate agent,' he said, suggesting that the agent should have specialized knowledge.
'C' is control flow, and 'T' is tool use, which he said everyone can agree on.
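These definitions map onto code fairly directly. Here is a toy sketch of an agent loop in that spirit; `call_llm` and the `tools` dictionary are hypothetical stand-ins for whatever model API and integrations you actually use, not any real library's interface.

```python
# A toy agent loop illustrating "LLM + memory + planning + tool use".
# call_llm and tools are hypothetical stand-ins supplied by the caller.
from typing import Callable

def run_agent(goal: str,
              call_llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              max_steps: int = 5) -> str:
    memory: list[str] = []                                  # memory
    memory.append(call_llm(f"Plan the steps for: {goal}"))  # planning
    for _ in range(max_steps):                              # control flow
        action = call_llm(
            "\n".join(memory) + "\nNext action as tool:arg, or FINISH:"
        )
        if action.strip() == "FINISH":
            break
        name, _, arg = action.partition(":")
        # the tools dict doubles as a whitelist: the agent's authority
        handler = tools.get(name.strip(), lambda a: f"unknown tool: {a}")
        memory.append(f"{action} -> {handler(arg)}")        # tool use
    return call_llm("\n".join(memory) + "\nFinal answer:")
```

The intent is the goal string, and evaluations would sit around the loop, scoring the final answer against that intent.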
Swyx called for a 'tight feedback loop' and processes that 'organically take traction' in the enterprise.
This part of the conversation was fascinating to me as a clear-eyed assessment of the different ways people use the term 'vibe coding.'
I've written about how figures like Andrej Karpathy and Riley Brown define this practice of working with AI that can craft code.
But there are two interpretations of this phrase, and they're radically different.
One interpretation, which the duo mentioned, is that the human programmer can get the vibe of the code and analyze it as a professional; this assumes they already know what code is supposed to look like.
But then there's the other definition.
'Vibe coding gets taken out of context,' Swyx said.
In this latter interpretation, you don't need expertise, because you just evoke the vibe of the code and let the AI figure it out.
But this way, he said, you can get into trouble and waste money.
As for best practices in vibe coding, Swyx suggested dealing with legacy code issues, maintaining appropriate skepticism about the limitations of vibe coding, and sampling the space.
'There's something here,' he said, displaying enthusiasm for the democratization of code. 'I don't know if vibe coding is the best name for it.'
In addition to all of the above, people are going to need some form of expertise, whether they are leaders, or builders, or both. Regardless of which way you view the new coding world, there's little question that reskilling for humans is going to be a piece of the puzzle. This resource from Harvard talks about tackling the challenge:
'As new technologies are integrated into organizations, with greater frequency, transforming how we work, the need for professionals to adapt and continue to learn and grow becomes more imperative.'
I agree.
All of this is quite instructive at a time when companies are looking for a way forward. Let's continue this deep analysis of business today, as AI keeps taking hold throughout the rest of the year.