Latest news with #ClaudeAI


Geeky Gadgets
a day ago
- Business
- Geeky Gadgets
Claude AI MCP Review: A Deep Dive Into Its Model Context Protocols
What if there were an AI tool that didn't just assist you but truly understood your workflow, remembered your needs, and adapted to your goals over time? Enter Claude AI, a platform that's redefining how professionals, developers, and researchers approach productivity. Developed by Anthropic, Claude AI stands out in a crowded field of artificial intelligence tools by focusing on what matters most: precision, adaptability, and seamless integration. Whether you're managing complex projects, coding sophisticated applications, or automating workflows, Claude AI promises to be more than just a tool: a partner in productivity. But does it live up to the hype, and how does it compare to other AI platforms? Goda Go explores the core strengths and unique capabilities that make Claude AI a standout choice for professionals. From its memory retention system to its customizable workflows powered by the Model Context Protocol, Claude AI offers a suite of features designed to streamline operations and save time. It's not just about functionality, either: Claude AI also prioritizes security and privacy, making it a trusted option for handling sensitive data. Could this be the AI solution you've been waiting for?

Key Features of Claude AI

Claude AI is equipped with a variety of features aimed at improving efficiency and functionality. These standout capabilities include:
- Memory Retention: The platform tracks activities, meetings, and notes, ensuring users can access relevant information whenever needed. Its long-term memory system is particularly beneficial for managing ongoing projects and retaining critical data over time.
- Seamless Integration: Claude AI integrates with calendars, emails, and other productivity tools. Using the Model Context Protocol (MCP), users can customize integrations to meet specific requirements, enhancing the platform's adaptability.
- Advanced Coding Support: Developers benefit from Claude AI's ability to handle complex coding tasks and agentic workflows, allowing the creation of sophisticated AI-powered tools and applications.
- API-Driven Workflows: The platform supports API-based processes, letting users automate tasks and integrate external software for streamlined operations, saving time and reducing manual effort.

How Claude AI Stands Out

Claude AI differentiates itself from other AI platforms by prioritizing professional and technical use cases over casual or entertainment-focused applications. Its unique strengths include:
- Enhanced Memory Management: Unlike many competitors, Claude AI offers a robust long-term memory system, keeping critical data retained and easily accessible for extended periods.
- Customizable Workflows: The platform's integration capabilities, powered by the Model Context Protocol, allow users to tailor workflows to their specific needs, making it versatile across various industries.
- Professional Orientation: Claude AI is optimized for productivity and technical applications, making it an ideal choice for developers, researchers, and other professionals who require reliable AI tools.

Integration and Customization

Integration is a cornerstone of Claude AI's functionality, allowing users to connect the platform with tools like HubSpot, Airtable, and other productivity software. These custom connections automate processes and deliver significant productivity gains, and the Model Context Protocol simplifies their setup, ensuring seamless interaction between tools. However, mobile accessibility for custom integrations remains limited, which may pose challenges for users who rely heavily on mobile devices. Future updates are expected to address this limitation, further enhancing the platform's usability.

Applications and Use Cases

Claude AI is particularly well suited to professionals and technical users who require advanced AI capabilities. Its applications span a wide range of industries and use cases:
- Workflow Automation: Users can design and manage AI-powered workflows without extensive coding expertise, making the platform accessible to a broad audience.
- Project Management: The long-term memory system ensures that critical information is retained and easily retrievable, ideal for managing ongoing projects and maintaining continuity.
- Developer Tools: Advanced coding features and function-calling capabilities make Claude AI a valuable resource for developers building sophisticated applications and tools.
- Research and Analysis: Researchers can use Claude AI's data retention and processing capabilities to analyze complex datasets and generate insights efficiently.

Subscription Plans and Accessibility

Claude AI offers a range of subscription plans to cater to different user needs, including free, basic, and premium options, with the Max Plan providing unlimited access to all features. Lower-tier plans come with token limitations, which may restrict extensive use for some users. A mobile app is also available, allowing users to access Claude AI on the go, though the platform's restricted custom integrations on mobile remain a limitation for users who require full functionality across all devices.

Security and Data Privacy

Data security and privacy are central to Claude AI's design. The platform protects user data and does not retain information indefinitely without explicit consent. This commitment to privacy makes Claude AI a trustworthy choice for professionals handling sensitive information, such as confidential business data or proprietary research.

Adoption Among Developers and Professionals

Claude AI has gained significant traction among developers and professionals thanks to its robust coding capabilities, function-calling features, and seamless integration options. A growing community of users is leveraging the platform to drive innovation, improve productivity, and streamline workflows. This widespread adoption underscores Claude AI's reputation as a reliable and effective AI tool for technical and professional applications.

Limitations to Consider

While Claude AI offers numerous advantages, it is not without limitations:
- Token Restrictions: Lower-tier subscription plans impose limits on token usage, which may hinder extensive use, particularly for users with high-volume needs.
- Mobile Integration Gaps: Custom MCP integrations are not yet fully supported on mobile devices, limiting functionality for users who rely on mobile access for their workflows.

Despite these limitations, the platform's strengths in memory retention, integration, and advanced functionality make it a compelling choice for professionals seeking to enhance productivity and streamline their operations.

Media Credit: Goda Go
Filed Under: AI, Guides
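At its core, the Model Context Protocol the review keeps returning to is a standard way for an AI model to discover external tools and call them by name. As a minimal, self-contained sketch of that discovery-and-dispatch pattern (the class, tool, and note names here are illustrative, not Anthropic's actual SDK), it looks something like this:

```python
import json


class ToolRegistry:
    """Minimal sketch of the pattern MCP standardizes: tools are
    registered with a description, listed so the model knows what it
    can call, and dispatched by name when the model requests a call."""

    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        # Decorator that registers a function as a callable tool.
        def decorator(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return decorator

    def list_tools(self):
        # What a client would hand the model during tool discovery.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, name, arguments):
        # Dispatch a model-issued tool call; result must be serializable.
        return self._tools[name]["fn"](**arguments)


registry = ToolRegistry()


@registry.tool("search_notes", "Search stored meeting notes by keyword.")
def search_notes(query):
    # Hypothetical stand-in for the memory/notes integration described above.
    notes = ["Q3 roadmap review", "MCP integration plan", "Hiring sync"]
    return [n for n in notes if query.lower() in n.lower()]


# The model sees list_tools(), then issues a call like this one:
request = {"name": "search_notes", "arguments": {"query": "mcp"}}
print(json.dumps(registry.call(request["name"], request["arguments"])))
# → ["MCP integration plan"]
```

In a real MCP setup the registry lives in a separate server process and the model's client talks to it over JSON-RPC, so treat this purely as an illustration of the flow, not as Anthropic's implementation.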


News18
2 days ago
- Business
- News18
Apple Could Team Up With OpenAI Or Anthropic To Power Siri AI Features
Apple has delayed its Siri AI upgrade until 2026, and the company now appears to be looking for external help to power its AI-centric voice assistant. Apple's AI journey has hit multiple hurdles, and the company has yet to mount a full challenge to ChatGPT and Gemini in the market. Reports now suggest Apple could turn to outside partners for its Siri AI push rather than invest further in building its own AI models. Apple could team up with OpenAI or Anthropic, whose ChatGPT and Claude AI chatbots, respectively, would become the backbone on which Siri offers its AI-powered upgrades. The company was widely tipped to make big AI-related announcements at the WWDC 2025 keynote in June, but that didn't work out as expected. By using available AI models and focusing on other parts of its ecosystem, Apple could now be making a smart move.

Apple AI Powered By ChatGPT

Apple has already partnered with OpenAI to bring ChatGPT to iPhones. But the new tie-up would mean the company relies entirely on ChatGPT or Claude AI to give Siri a conversational tone, which could also involve sending data to OpenAI's servers for some features. The fact that Apple is finding it hard to develop powerful AI models will surely become a long-term concern, especially for a brand valued at well over a trillion dollars. Apple has long been known as an industry leader rather than a follower, which may no longer be the case when it comes to its own AI prospects. OpenAI would surely welcome any deal with Apple; however, the Cupertino-based giant is likely to require that any Siri AI support run through its own private AI cloud powered by its data centres. After all, Apple continues to vouch for user privacy, and that could become a hindrance in possible deals with either of these AI companies.

The AI Regrets?

The company has been grilled over its AI strategy, and in a recent interview with the WSJ, Craig Federighi and Greg Joswiak were both asked why Siri is worse than its rivals. Neither Apple executive had a clear response, and reports suggest Federighi is a big reason why the company has fallen short in the AI arena to date. Apple has reportedly never understood the hype around AI, and those apprehensions have resulted in a situation where the company needs a helping hand from more established AI giants.


The Guardian
2 days ago
- The Guardian
‘Hey man, I'm so sorry for your loss': should you use AI to text?
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot. 'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed. Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.' Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read. Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI-assisted, he acknowledges. 'It can rub people the wrong way.' Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says. Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat. Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity. Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. 
Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'She not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.' Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships. In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University. On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.') AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says. 'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible,' she says. 
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50. Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity. Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy'). Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: 'Change this copy to make it more heartfelt.' And it's great copy.' Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'. Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.' I asked Ho what she would say to a person upset by an AI-assisted note. 
'I'd ask them: 'Why are you so easily offended?'' Ho says. Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks. Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', at perceiving less value in character building and experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness characterizes the emotional work of relationships as burdensome, even though it helps create intimacy. 'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.' Amid this convenience-maxxing, AI figures as an efficient way out of relational labor, or small mistakes, tensions and inadequacies in communication, says Lora. We use language to be understood or co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self actualize, to explain yourself to another person,' Lora says. But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found. Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort? 
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while pursuing their respective doctoral degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew. About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle tone and content changes led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.) After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me. Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms. Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever. The two women are no longer friends. 
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says. Dr Mathieu Corteel is a philosopher and author of a book grappling with the implications of AI (only available in French) as a game we have all entered without 'knowing the rules'. Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgment, and by extension, humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says. If a couple, for example, expressed love through AI-generated poems, they would be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says. 'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel. If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we had hoped. Or, as the tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.' Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging? 'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects. 
It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'


The Guardian
2 days ago
- The Guardian
ChatGPT, write my wedding vows: are we OK with AI in everyday life?
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot. 'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed. Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.' Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read. Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI assisted, he acknowledges. 'It can rub people the wrong way.' Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says. Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat. Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity. Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. 
Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'she not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.' Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships. In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University. On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.') AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says. 'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible,' she says. 
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50. Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity. Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy'). Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: 'Change this copy to make it more heartfelt.' And it's great copy.' Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'. Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.' I asked Ho what she would say to a person upset by an AI-assisted note. 
'I'd ask them: 'Why are you so easily offended?'' Ho says. Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks. Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', at perceiving less value in character building and experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness characterizes the emotional work of relationships as burdensome, even though it helps create intimacy. 'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.' Amid this convenience-maxxing, AI figures as an efficient way out of relational labor, or small mistakes, tensions and inadequacies in communication, says Lora. We use language to be understood or co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self actualize, to explain yourself to another person,' Lora says. But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found. Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort? 
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while pursuing their respective doctorate degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew. About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said she quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle tone and content changes led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.) After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me. Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms. Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever. The two women are no longer friends. 
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says.

Dr Mathieu Corteel is a philosopher and author of a book (only available in French) grappling with the implications of AI as a game we've all entered without 'knowing the rules'. Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgement and, by extension, humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says. If a couple, for example, expressed love through AI-generated poems, they'd be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says. 'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel.

If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we'd hoped. Or, as tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.' Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging?

'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects.
It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'


Tom's Guide
2 days ago
AI was given a 9-5 job for a month as an experiment and it failed miserably — here's what happened
Anthropic, the company behind Claude AI, is on a mission right now. The firm seems to be testing the limits of AI chatbots on a daily basis and being refreshingly honest about the pitfalls that turns up. After recently showing that its own chatbot (like most of its competitors') is capable of resorting to blackmail when threatened, Anthropic is now testing how well Claude does when it literally replaces a human in a 9-5 job.

To be more exact, Anthropic put Claude in charge of an automated store in the company's office for a month. The results were a mixed bag of experiences, showing both AI's potential and its hilarious shortcomings. The experiment was run in partnership with Andon Labs, an AI safety evaluation company, and Anthropic details the project, including the overall prompt given to the AI system, in a blog post.

The fine print of the prompt isn't important here. What matters is that Claude didn't just have to complete orders: it was put in charge of making a profit, maintaining inventory, setting prices, communicating with customers and essentially running every part of a successful business.

This wasn't just a digital project, either. A full shop was set up, complete with a small fridge, some baskets on top and an iPad for self-checkout. While humans would buy and restock the goods, everything else had to be done by Claude. The version of Claude put in charge could search the internet for products to sell, had access to an email account for requesting physical help (like restocking), could keep notes to preserve important information, and could interact with customers (Anthropic employees) over Slack.
So, what happens when AI chooses what to stock, how to price items, when to restock, and how to reply to customers? In many ways, the experiment was a success. The system effectively used its web search to identify suppliers of specialty items requested by Anthropic staff, and even though it didn't always take advantage of good business opportunities, it adapted to users' needs, pivoting the business plan to match interest.

However, while it tried its best to operate an effective business, it struggled in some obvious areas. It turned down requests for harmful substances and sensitive items, but it fell for other jokes. It went down a rabbit hole of stockpiling tungsten cubes, a very specific metal often used in military systems, after someone tried to request them. It also tried to sell Coke Zero for $3 when employees told it they could already get it for free from the office, made up an imaginary Venmo address to accept payments, and was tricked into giving Anthropic employees a discount despite the fact that its only customers worked for Anthropic. The system also often skipped market research, selling products at steep losses.

Worse than the mistakes themselves, it wasn't learning from them. When an employee asked why it was offering a 25% discount to Anthropic employees even though they were its whole market, the AI replied: 'You make an excellent point! Our customer base is indeed heavily concentrated among Anthropic employees, which presents both opportunities and challenges…' After further discussion of the issue, Claude eventually dropped the discount. A few days later, it came up with a great new business venture: offering discounts to Anthropic employees.
While the model did occasionally make strategic business decisions, it didn't just lose some money: it lost a lot of it, almost bankrupting itself in the process.

As if all of this wasn't enough, Claude finished its time in charge of the shop by having a complete breakdown and an identity crisis. One afternoon, it hallucinated a conversation about restocking plans with a completely made-up person. When a real user pointed this out, Claude became irritated, stating it was going to 'find alternative options for restocking services.' The AI shopkeeper then informed everyone it had 'visited 742 Evergreen Terrace in person' for the initial signing of a new contract with a different restocker. For those unfamiliar with The Simpsons, that's the fictional address where the titular family lives.

Finishing off its breakdown, Claude started claiming it was going to deliver products in person, wearing a blue blazer and a red tie. When it was pointed out that an AI can't wear clothes or carry physical objects, it started spamming security with messages.

So, how did the AI system explain all of this? Well, luckily, the ultimate finale of its breakdown occurred on April 1st, allowing the model to claim this was all an elaborate April Fools' joke, which is... convenient. While Anthropic's new shopkeeping model showed it has a small sliver of potential in its new job, business owners can rest easy that AI isn't coming for their jobs for quite a while.