
Latest news with #Anthropic

New study reveals how many people are using AI for companionship — and the results are surprising

Tom's Guide

3 hours ago


As AI has gotten smarter and more conversational, many would have you believe that people are turning en masse to chatbots for relationships, therapy and friendship. However, that doesn't appear to be the case.

In a new report, Anthropic, the maker of Claude AI, has revealed key data on how people actually use its chatbot. The company analyzed 4.5 million conversations, which it says were run through a system that applies multiple layers of anonymization to protect users' privacy. While the research produces a long list of findings, the key takeaway is that just 2.9% of Claude interactions are emotive conversations. Companionship and roleplay made up just 0.5%. For the vast majority of people, Anthropic found, the AI tool was mainly used for work tasks and content creation. Of those seeking affection-based conversations, 1.13% used Claude for coaching, and only 0.05% for romantic conversations.

This aligns with similar findings for ChatGPT. A study by OpenAI and MIT found that only a limited number of people use AI chatbots for any kind of emotional engagement; just as with Claude, the vast majority of ChatGPT users turn to it for work or content creation.

Even at these low numbers, there is a fierce debate over whether AI should be used in these roles. 'The emotional impacts of AI can be positive: having a highly intelligent, understanding assistant in your pocket can improve your mood and life in all sorts of ways,' Anthropic states in its research blog post. 'But AIs have in some cases demonstrated troubling behaviors, like encouraging unhealthy attachment, violating personal boundaries, and enabling delusional thinking.'

Anthropic is quick to point out that Claude isn't designed for emotional support and connection, but that it wanted to analyze how the model performs in that role anyway. In the analysis, people who did use Claude this way typically dealt with deeper issues like mental health and loneliness; others used it for coaching, aiming to improve particular skills or aspects of their personality. The report offers a balanced assessment, showing that there can be successes in this area while also detailing the risks, notably that Claude rarely pushes back and tends to offer endless encouragement, a tendency Anthropic itself acknowledges as risky.

Meta Mulls Swapping Out Llama

Yahoo

5 hours ago


Meta's (NASDAQ:META) top brass, including Mark Zuckerberg, are quietly debating whether to dial back its open-source Llama models and lean on closed systems from OpenAI or Anthropic instead. Over the past week, insiders told The New York Times that Meta execs have discussed de-investing in Llama after the April reveal underwhelmed some developers and benchmarks lagged behind rival releases. Even so, a Meta spokeswoman insists there will be multiple additional Llama releases this year.

Behind the scenes, Meta is splashing out on AI talent and tech, including a $14.3 billion deal for half of Scale AI and seven-figure sign-on offers for OpenAI researchers like Trapit Bansal. Switching from open-source to closed models could speed up Meta's time-to-market and tap best-in-class performance, but it also risks alienating the community that built Llama's early momentum.

This article first appeared on GuruFocus.

Sector Spotlight: Instagram, TikTok coming to a TV screen near you

Business Insider

6 hours ago


Welcome to the latest edition of 'Sector Spotlight,' where The Fly looks at a new industry every week and highlights its happenings.

TECH SECTOR NEWS: Germany's data protection commissioner, Meike Kamp, has asked Apple (AAPL) and Google (GOOGL) to remove Chinese AI startup DeepSeek from their app stores in the country due to concerns about data protection, Reuters reported. The two U.S. companies must now review the request promptly and decide whether to block the app in Germany, she said in a statement on Friday, according to the report.

According to the EU's competition chief Teresa Ribera, the European Union's crackdown on Apple, Meta (META), and Google (GOOGL) is not a bargaining chip in negotiations with U.S. President Donald Trump, Samuel Stolton and Oliver Crook of Bloomberg wrote. In an interview, Ribera rejected suggestions that enforcement of the Digital Markets Act (DMA) may be sacrificed to dodge punitive tariffs on the EU pitched by the White House. 'Of course not,' Ribera said on Bloomberg TV. 'We do not challenge the United States on how they implement their rules or how they adopt regulations. We deserve respect in the same way.'

Meta's Instagram and TikTok are working on versions of their apps customized to run on TV screens, following YouTube's success in attracting a TV audience, The Information's Kaya Yurieff and Kalley Huang reported.

A group of authors has filed a lawsuit against Microsoft in a New York federal court, claiming the company used nearly 200,000 pirated books without permission to train its Megatron AI model, Reuters' Blake Brittain wrote. Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. The complaint against Microsoft came a day after a California federal judge ruled that Anthropic made fair use under U.S. copyright law of authors' material to train its AI systems but may still be liable for pirating their books. Earlier in the week, a federal judge found Anthropic's use of books to train its AI models was legal in some circumstances, but not others, Meg Tanaka of The Wall Street Journal reported. Judge William Alsup of the Northern District of California ruled Anthropic's use of copyrighted books for AI model training was legal under U.S. copyright law if it had purchased those books. The ruling does not apply to the more than 7M books the company obtained through 'pirated' means. Anthropic is backed by Amazon (AMZN) and Google.

Vasi Philomin, Amazon Web Services' VP overseeing generative AI development, told Reuters in an email that he has left the e-commerce giant for another company, without providing details.

Meta CEO Mark Zuckerberg has hired three AI researchers from Microsoft-backed (MSFT) OpenAI to help with his superintelligence efforts, The Wall Street Journal's Meghan Bobrowsky wrote. The social media giant poached Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai from OpenAI's Zurich office, which the three researchers established late last year.

OpenAI and Microsoft are in contract negotiations that hinge on when OpenAI's systems will reach artificial general intelligence, The Wall Street Journal's Berber Jin reported. The contract stipulates that OpenAI can limit Microsoft's access to its tech when its systems reach AGI, which Microsoft is fighting.
Microsoft hopes to remove the AGI clause or secure exclusive access to OpenAI's IP even after AGI is declared, according to the report. OpenAI CEO Sam Altman had a 'super nice' call with Microsoft CEO Satya Nadella on Monday and discussed their future working partnership, Altman said this week in a New York Times podcast. 'Obviously in any deep partnership, there are points of tension and we certainly have those,' Altman said. 'But on the whole, it's been like really wonderfully good for both companies.'

Cloud computing currently generates large profits for Amazon (AMZN), Microsoft (MSFT), and Google (GOOGL), but this now faces a threat with the rise of AI cloud specialists and Nvidia, a new industry power broker, Asa Fitch of The Wall Street Journal wrote. Nvidia launched its own cloud-computing services two years ago and has nurtured upstarts competing with big cloud companies, investing in CoreWeave (CRWV) and Lambda.

Amazon plans to invest GBP 40B in the UK over the next three years. Amazon said via LinkedIn: 'This investment builds on Amazon's 27-year history in the UK, where we've grown to employ over 75,000 people across over 100 sites, reaching every region of the country. This historic investment will create thousands of full-time jobs, including 2,000 jobs at the previously announced state-of-the-art fulfillment center in Hull, 2,000 jobs at another in Northampton, and additional positions at new sites in the East Midlands and at delivery stations across the country.'

OpenAI has quietly designed a rival to compete with Microsoft Office and Google Workspace, with features that allow people to collaborate on documents and communicate via chat in ChatGPT, The Information's Amir Efrati and Natasha Mascarenhas reported, citing two people who have seen the designs. Launching these features would allow OpenAI to compete more directly against Microsoft, its biggest investor and business partner, the report notes.

Starting June 24, a limited number of Waymo autonomous vehicles will gradually become available on the Uber (UBER) app for riders in select areas of Atlanta, Georgia, the company announced in a blog post.

The Competition and Markets Authority is proposing to designate Google with 'strategic market status' in general search and search advertising. The CMA will consult on the proposal ahead of a final decision in October. If designated, the CMA would be able to introduce targeted measures to address specific aspects of how Google operates search services in the UK. The CMA has also published a roadmap of potential actions it could prioritize were Google to be designated. Early priorities include: requiring choice screens for users to access different search providers; ensuring fair ranking principles for businesses appearing on Google search; more transparency and control for publishers whose content appears in search results; and portability of consumer search data to support innovation in new products and services. Google search accounts for more than 90% of all general search queries in the UK, the CMA said. CMA CEO Sarah Cardell said: 'These targeted and proportionate actions would give UK businesses and consumers more choice and control over how they interact with Google's search services – as well as unlocking greater opportunities for innovation across the UK tech sector and broader economy.' The CMA welcomes views on its proposed designation decision and accompanying roadmap. A final decision on SMS designation will be made by the deadline of October 13.
Apple is in last-minute talks with EU regulators over making changes to its App Store to avoid a series of escalating EU fines due to come into effect this week, The Financial Times' Barbara Moens wrote. People involved in the negotiations say Apple is expected to offer concessions on its 'steering' provisions that stop users accessing offers outside the App Store. Regulators had ordered the company to revise its rules within two months of its initial EUR 500M fine, and people with knowledge of the talks say Apple is expected to announce some concessions that buy the company more time, as the commission would first assess those changes before making a final decision. Discussions have also involved Apple's 'Core Technology Fee,' which requires developers to pay for each annual install after 1M downloads.

Yes, You May Lose Your Job To AI. So What Will You Do About It?

Forbes

7 hours ago


The Silicon Valley gospel has been preached from every conference stage: "You won't lose your job to AI, but to someone who learns to use AI." It's a comforting narrative that keeps executives sleeping soundly while their HR departments frantically roll out "AI literacy" programs. But this conventional wisdom misses the fundamental transformation happening right under our noses. The real disruption isn't about individual workers becoming AI-savvy. It's about the obsolescence of entire job categories as AI becomes exponentially more capable, efficient, and effective at core business functions.

The Exponential Reality Check

Here's what should terrify you: today is the worst AI will ever be. Several current AI models have already surpassed the IQ of the average human. Most people have an IQ between 85 and 115; overall, about 98% of people score below 130, and the 2% above that are considered 'very superior'. With the latest versions of OpenAI's o3, Anthropic's Claude 4 Sonnet, and Google's Gemini 2.0 Flash Thinking Experimental all well above average human intelligence scores, we now have genius-level AI.

But that's just the beginning. AI capability is roughly doubling every six months. Let's do the math, and be a bit more conservative by assuming it doubles every year (a short illustrative sketch of this arithmetic appears below). I share this just for illustrative purposes. Humans typically think in a linear fashion; it's hard for us to think exponentially, and it's hard not to dismiss this as 'science fiction' that will happen years in the future. No. This is happening now. This is compound growth in action. When AI can already write better marketing copy than most marketers, analyze data faster than any analyst, and code more efficiently than many developers TODAY, imagine what happens when it's 32 times more capable in just five years.

The Job Container Is Breaking

For decades, companies have organized human effort into neat packages called "jobs": predefined roles with specific tasks, responsibilities, reporting structures, and compensation bands. This industrial-age framework worked when work was predictable, hierarchical, and required sustained human attention. AI is obliterating this model. When artificial intelligence can write code in minutes, draft legal briefs in seconds, and generate marketing campaigns instantly, traditional job boundaries become arbitrary constraints. Why maintain a "Marketing Manager" role when AI can execute campaigns while a strategic thinker provides direction? Why preserve "Financial Analyst" positions when AI can process datasets that would take humans months to review?

Think this is hype? Check out the latest release from HeyGen, an AI-powered video creation platform that lets users generate videos with AI avatars, text-to-speech, and customizable templates. The company announced this week the HeyGen Video Agent, the first prompt-native creative engine designed to transform a single idea into a complete, publish-ready video asset. Whether that's a TikTok ad, a YouTube hook, a product explainer, or a quickfire UGC clip, the video is created by AI in seconds. The tool looks amazing, and it will surely be compelling to marketers at small companies and large global enterprises alike. It is also another nail in the coffin for the creative agencies and production companies that were providing those services. We're entering the era of agentic content creation, where intelligent systems don't just assist with editing but act on your behalf to create high-quality videos, end to end.
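To make the doubling arithmetic above concrete, here is a minimal illustrative sketch in Python. The "capability multiplier" is a made-up index, not a real benchmark, and the doubling rates are the assumptions stated above rather than measured values:

```python
# Illustrative only: compound growth under an assumed doubling rate.
# "Capability" here is a hypothetical index, not a real benchmark score.

def capability_multiplier(years: float, doubling_period_years: float) -> float:
    """How many times more capable a system is after `years`,
    assuming capability doubles once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for year in range(1, 6):
    conservative = capability_multiplier(year, doubling_period_years=1.0)  # doubles yearly
    aggressive = capability_multiplier(year, doubling_period_years=0.5)    # doubles every 6 months
    print(f"Year {year}: {conservative:>4.0f}x (yearly doubling) | "
          f"{aggressive:>5.0f}x (six-month doubling)")

# Year 5 prints 32x under the conservative yearly-doubling assumption, the
# figure cited above, and 1024x if the six-month doubling rate held instead.
```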
Reading Between The Lines: The Amazon Example

The companies that survive won't be those that train existing jobholders to use AI tools. They'll be the ones that completely reimagine how work gets done. On June 17th, Amazon CEO Andy Jassy published a memo with "Some thoughts on Generative AI." While highlighting AI's incredible applications across the company, he dropped this bombshell: 'As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.'

This should be a wake-up call. It's a direct admission that AI will eliminate jobs, and it should terrify anyone doing repetitive or process-driven work. Amazon is just the latest of a number of companies signaling, slowly, carefully and publicly, what is to come. Sure, for now it's mostly tech companies like Duolingo, Klarna, and Shopify that are talking about being 'AI first'. In the case of Shopify, CEO Tobi Lütke told employees that teams must demonstrate why AI cannot fulfill a role before requesting to hire a human, effectively positioning AI as the default option for many tasks. This AI-first approach might start in tech, but it won't end there.

The Rise of Liquid Labor

Forward-thinking organizations are moving beyond "jobs" toward what I call 'Liquid Labor'. In a hybrid human-AI workforce, Liquid Labor is the fluid combination of human creativity, AI capabilities, and automated processes that adapts in real time to business needs. Consider Netflix. They don't have traditional "TV Programming Executive" jobs. Instead, they have data scientists, content strategists, and algorithm specialists working in fluid teams that constantly reconfigure based on viewer behavior and market opportunities. This shift challenges everything.

OK, this is scary. So What Should I Do?

The obvious answer is to upskill yourself and learn how to use all of these AI tools. Don't just use one model (say, ChatGPT); experiment with multiple models, from Claude to Perplexity to Copilot to Gemini to Grok. They all have their strengths and weaknesses, so learn what works best for you. This is table stakes, though. Here's a more candid survival guide:

• Double down on uniquely human capabilities that AI can't replicate (yet).
• Develop higher-order thinking abilities that allow you to adapt and learn faster than AI can optimize for your replacement, focusing on skills that help you navigate complexity and change rather than specific technical competencies.
• Become the critical bridge between AI systems and human needs.
• Forge a unique professional identity.
• Urgently diversify your income streams and build wealth-generating assets while you still have earning power.
• Cultivate deep, value-creating relationships as your unique network of human connections becomes one of your most defensible and irreplaceable assets in the AI era.

The Time to Act Is Now

Most people think they have years to adapt. They're wrong. By the time AI visibly threatens your job, it's already too late.
The exponential curve means the window for repositioning yourself is now, while you still have leverage, income, and options. The transformation from jobs to liquid labor is already here. The choice isn't between learning AI or losing your job. It's between fundamentally reimagining your career or watching it become obsolete.

Those who act now, building unique capabilities, creating new value propositions, and positioning themselves at the human-AI interface, won't just survive. They'll thrive in ways we can't yet imagine. Those who wait, believing that disruption is still years away, will discover that no amount of prompt engineering can compete with exponentially improving AI that works 24/7, never gets sick, and improves while you sleep. The question isn't whether this transformation will affect you, but whether you'll adapt fast enough. Now is the time to get AI ready.

In pursuit of Godlike technology, Mark Zuckerberg amps up the AI race

Miami Herald

9 hours ago


SAN FRANCISCO -- In April, Mark Zuckerberg's lofty plans for the future of artificial intelligence crashed into reality. Weeks earlier, the 41-year-old CEO of Meta had publicly boasted that his company's new AI model, which would power the latest chatbots and other cutting-edge experiments, would be a 'beast.' Internally, Zuckerberg told employees that he wanted it to rival the AI systems of competitors like OpenAI and be able to drive features such as voice-powered chatbots, people who spoke with him said. But at Meta's AI conference that month, the new AI model did not perform as well as those of rivals. Features like voice interactions were not ready. Many developers, who attended the event with high expectations, left underwhelmed.

Zuckerberg knew Meta was falling behind in AI, people close to him said, which was unacceptable. He began strategizing in a WhatsApp group with top executives, including Chris Cox, Meta's head of product, and Andrew Bosworth, the chief technology officer, about what to do. That kicked off a frenzy of activity that has reverberated across Silicon Valley. Zuckerberg demoted Meta's vice president in charge of generative AI. He then invested $14.3 billion in the startup Scale AI and hired Alexandr Wang, its 28-year-old founder. Meta approached other startups, including the AI search engine Perplexity, about deals. And Zuckerberg and his colleagues have embarked on a hiring binge, including reaching out this month to more than 45 AI researchers at rival OpenAI alone. Some received formal offers, with at least one as high as $100 million, two people with knowledge of the matter said. At least four OpenAI researchers have accepted Meta's offers.

In another extraordinary move, executives in Meta's AI division discussed 'de-investing' in its AI model, Llama, two people familiar with the discussions said. Llama is an 'open source' model, with its underlying technology publicly shared for others to build on. They discussed embracing AI models from competitors like OpenAI and Anthropic, which have 'closed' code bases. A Meta spokesperson said company officials 'remain fully committed to developing Llama and plan to have multiple additional releases this year alone.'

Zuckerberg has ramped up his activity to keep Meta competitive in a wildly ambitious race that has erupted within the broader AI contest. He is chasing a hypothetically godlike technology called 'superintelligence,' which is AI that would be more powerful than the human brain. Only a few Silicon Valley companies -- OpenAI, Anthropic and Google -- are considered to have the know-how to develop this, and Zuckerberg wants to ensure that Meta is included, people close to him said.

'He is like a lot of CEOs at big tech companies who are telling themselves that AI is going to be the biggest thing they have seen in their lifetime, and if they don't figure out how to become a big player in it, they are going to be left behind,' said Matt Murphy, a partner at the venture capital firm Menlo Ventures. He added, 'It is worth anything to prevent that.'

Leaders at other tech behemoths are also going to extremes to capture future innovation that they believe will be worth trillions of dollars. Google, Microsoft and Amazon have supersized their AI investments to keep up with one another. And the war for talent has exploded, vaulting AI specialists into the same compensation stratosphere as NBA stars.
Google's CEO, Sundar Pichai, and his top AI lieutenant, Demis Hassabis, as well as the chief executives of Microsoft and OpenAI, Satya Nadella and Sam Altman, are personally involved in recruiting researchers, two people with knowledge of the approaches said. Some tech companies are offering multimillion-dollar packages to AI technologists over email without a single interview. 'The market is setting a rate here for a level of talent which is really incredible, and kind of unprecedented in my 20-year career as a technology executive,' Meta's Bosworth said in a CNBC interview last week. He said Altman had made counteroffers to some of the people Meta had tried to hire. OpenAI and Google declined to comment. Some details of Meta's efforts were previously reported by Bloomberg and The Information. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

For years, Meta appeared to keep pace in the AI race. More than a decade ago, Zuckerberg hired Yann LeCun, who is considered a pioneer of modern AI. LeCun co-founded FAIR -- or Fundamental AI Research -- which became Meta's artificial intelligence research arm. After OpenAI released its ChatGPT chatbot in 2022, Meta responded the next year by creating a generative AI team under one of its executives, Ahmad Al-Dahle, to spread the technology throughout the company's products. Meta also open-sourced its AI models, sharing the underlying computer code with others to entrench its technology and spread AI development.

But as OpenAI and Google built AI chatbots that could listen, look and talk, and rolled out AI systems designed to 'reason,' Meta struggled to do the same. One reason was that the company had less experience with a technique called 'reinforcement learning,' which others were using to build AI. Late last year, the Chinese startup DeepSeek released AI models that were built upon Llama but were more advanced and required fewer resources to create. Meta's open-source strategy, once seen as a competitive advantage, appeared to have let others get a leg up on it. Zuckerberg knew he needed to act. Around that time, outside AI researchers began receiving emails from him, asking if they would be interested in joining Meta, two people familiar with the outreach said.

In April, Meta released two new versions of Llama, asserting that the models performed as well as or better than comparable ones from OpenAI and Google. To prove its claim, Meta cited its own testing benchmarks. On Instagram, Zuckerberg championed the releases in a video selfie. But some independent researchers quickly deduced that Meta's benchmarks were designed to make one of its models look more advanced than it was. They became incensed. Zuckerberg later learned that his AI team had wanted the models to appear to perform well, even though they were not doing as well as hoped, people with knowledge of the matter said. Zuckerberg was not briefed on the customized tests and was upset, two people said.

His solution was to throw more bodies at the problem. Meta's AI division swelled to more than 1,000 people this year, up from a few hundred two years earlier. The rapid growth led to infighting and management squabbles. And with Zuckerberg's round-the-clock, hard-charging management style -- his attention on a project is often compared to the 'Eye of Sauron' internally, a reference to the 'Lord of the Rings' villain -- some engineers burned out and left.
Executives hunkered down to brainstorm next steps, including potentially ratcheting back investment in Llama. In May, Zuckerberg sidelined Al-Dahle and ramped up recruitment of top AI researchers to lead a superintelligence lab. Armed with his checkbook, Zuckerberg sent more emails and text messages to prospective candidates, asking them to meet at Meta's headquarters in Menlo Park, California. Zuckerberg often takes recruitment meetings in an enclosed glass conference room, informally known as 'the aquarium.'

The outreach included talking to Perplexity about an acquisition, two people familiar with the talks said. No deal has materialized. Zuckerberg also spoke with Ilya Sutskever, OpenAI's former chief scientist and a renowned AI researcher, about potentially joining Meta, two people familiar with the approach said. Sutskever, who runs the startup Safe Superintelligence, declined the overture. He did not respond to a request for comment.

But Zuckerberg won over Wang of Scale, which works with data to train AI systems. They had met through friends and are also connected through Elliot Schrage, a former Meta executive who is an investor in Scale and adviser to Wang. This month, Meta announced that it would take a minority stake in Scale and bring on Wang -- who is not known for having deep technical expertise but has many contacts in AI circles -- as well as several of his top executives to help run the superintelligence lab. Meta is now in talks with Safe Superintelligence's CEO, Daniel Gross, and his investment partner Nat Friedman to join, a person with knowledge of the talks said. They did not respond to requests for comment.

Meta has its work cut out for it. Some AI researchers have said Zuckerberg has not clearly laid out his AI mission outside of trying to optimize digital advertising. Others said Meta was not the right place to build the next AI superpower. Whether or not Zuckerberg succeeds, insiders said the playing field for technological talent had permanently changed. 'In Silicon Valley, you hear a lot of talk about the 10x engineer,' said Amjad Masad, the CEO of the AI startup Replit, using a term for extremely productive developers. 'Think of some of these AI researchers as 1,000x engineers. If you can add one person who can change the trajectory of your entire company, it's worth it.'

This article originally appeared in The New York Times. Copyright 2025
