
Latest news with #CharacterAI

Meta owner Mark Zuckerberg spends Rs 1170000000000 to hire this man, his name is.., his expertise is to...

India.com

15 hours ago

Mark Zuckerberg is making major moves to establish Meta's role in the fast-changing world of artificial intelligence (AI). In a big deal, he has partnered with a major player in the AI world who could help push Meta's AI aspirations further. To finalize this arrangement, Zuckerberg reportedly invested a whopping $14 billion. According to media reports, Meta has acquired a company named Scale AI. However, the truth is that Meta simply made a large investment in Scale AI; it didn't acquire the company. If it were an acquisition, Meta would have had to buy all of Scale's shares, and every employee would have had to either receive Meta stock or cash out in some manner. This did not happen. Instead, Meta invested $14 billion in Scale AI, which raised Scale's valuation to $29 billion and gave Meta a stake of nearly 49%. Scale is still a separate company, and its board is unchanged. With this level of influence, however, the company is very likely to fall directly in line with Mark Zuckerberg's vision.

Alexandr Wang is the founder and CEO of Scale AI, and he was critical to making this deal happen. Wang is joining Meta but will remain on Scale's board. With Meta's and Wang's combined stakes, the two now have effective control of Scale AI. To put it another way, for the foreseeable future, key decisions at Scale AI could practically be dictated by Meta. The deal was so large that some took it for Meta buying the company outright. In fact, a significant amount of the capital ultimately went to Scale AI's employees, who were able to cash out part of their shares while retaining a percentage of their ownership. This allowed them to profit immediately while staying invested in the company's future growth. The idea is said to have come from Alexandr Wang himself, ensuring that his team benefited alongside him and didn't get left behind.

The most interesting thing about the deal is that Meta does not seem truly interested in Scale AI's core business. Scale AI is primarily a data labeler, doing the preparatory work for training machine learning models, which is usually a human-intensive task. It is also a low-tech task and therefore low in innovation. Scale works with big clients such as Toyota, General Motors, and various governments, who want to adopt AI but have no idea how to build it. For a tech company of Meta's size, Scale's business does not quite fit either: Meta is not building a B2B data service business, and Scale's datasets are not valuable enough on their own to warrant a deal of this size. The real purpose of the deal, it seems, was for Meta to acquire Alexandr Wang, the CEO behind Scale AI. This is not unprecedented: Google invested in Character AI and lured some of its best employees onto its Gemini team, and Microsoft did something similar with Inflection AI.

So why is Alexandr Wang so significant? In the modern tech race, the player that builds the strongest large language models (LLMs) will win the game. It is a race to claim market territory. Many claim they can build LLMs, but success is impossible without the right data, enormous compute, and the ability to scale. Users will always go with the highest-performing model; in this game, second best doesn't matter. Meta has not kept pace in the AI race to date. OpenAI has already claimed the consumer software market with ChatGPT, and Google and Anthropic are established developer players. Meta has models like Llama 2, but it has not been able to plant its flag first in what is becoming a heated market. To this point, Meta's play has been to keep its models open-source, and that was enough to gather a broad audience of developers and researchers. Now, Meta understands open source can take it only so far. It needs a visionary leader capable of defining its AI future, and Alexandr Wang is expected to be that leader.
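The arithmetic behind those figures is worth a quick check (using only the numbers reported above, and assuming the $29 billion is a post-money valuation): a new investor's stake is its investment divided by the post-money valuation, so

    stake ≈ $14 billion / $29 billion ≈ 0.483

which rounds to the 'nearly 49%' stake the reports describe.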

Why Mark Zuckerberg Spent $14 Billion To Get Alexandr Wang To Meta

News18

a day ago

Meta relied on open-source models to attract developers, but now seeks a visionary leader to shape its AI future, prompting the $14 billion bet on Alexandr Wang to lead the charge.

Mark Zuckerberg is reportedly under pressure as Meta struggles to keep pace in the rapidly advancing world of artificial intelligence. In a bold move to change course, Meta has made a massive investment aimed at strengthening its AI capabilities. The tech giant has reportedly poured $14 billion into Scale AI, a leading data-labelling startup, effectively doubling the company's valuation to $29 billion. The deal is said to give Meta a significant 49% stake in Scale AI, along with a strategic edge in the AI race.

Despite the substantial investment, Scale AI remains an independent entity with no changes to its board. Nevertheless, Meta now wields considerable influence over the company's operations. Alexandr Wang, Scale AI's founder and CEO, plays a pivotal role in this arrangement. Although Wang retains his position on Scale's board, his partnership with Meta means the tech giant effectively steers Scale AI's decisions.

The deal was substantial enough to create the impression that Meta had acquired Scale AI entirely. In reality, a significant portion of the deal benefited Scale AI's employees, who received substantial payouts for their shares while retaining some equity. This arrangement, reportedly Alexandr Wang's idea, ensured that his team could profit from the company's growth.

Why Is Meta Interested In Scale AI's Business?

Meta's interest in Scale AI is particularly noteworthy, given that the latter's primary business involves data labelling for machine learning, a service with minimal technological innovation. Scale AI caters to clients such as Toyota, General Motors, Etsy, and various governments, providing data preparation services for those keen on adopting AI but lacking the in-house capability to develop it. This investment does not align with Meta's core business interests, as Meta is not looking to become a B2B data service company. The primary objective of the deal was to bring Alexandr Wang into Meta's fold, a strategy similar to Google's investment in Character AI and Microsoft's acquisition of talent through Inflection AI.

The Race To Build The Best LLM

In today's AI-driven world, the company that builds the best Large Language Model (LLM) will dominate. It's a battle for market leadership, where knowing how to build models isn't enough. Without the right data, massive computing power, and the ability to scale, survival is unlikely. Meta is currently trailing in the AI race. OpenAI has dominated the consumer space with ChatGPT, while Google and Anthropic hold strong positions in the developer ecosystem. Although Meta has released models like Llama 2, it has yet to secure the top spot in the LLM race. Meta's core strategy so far has focused on open-sourcing its models, which helped attract developers and researchers to its ecosystem. However, the company now believes that open source alone isn't enough. What it needs is a visionary leader to steer its AI future, and that's where Wang comes in. He is seen as the ideal choice to take Meta's AI ambitions to the next level.

First Published: July 01, 2025, 18:55 IST

How generative AI is affecting people's minds

Al Jazeera

2 days ago

Researchers at Stanford University recently tested some of the more popular AI tools on the market, from companies like OpenAI, to see how well they simulated therapy. The researchers found that when they imitated someone with suicidal intentions, these tools were worse than unhelpful: they failed to notice they were helping that person plan their own death.

'[AI] systems are being used as companions, thought-partners, confidants, coaches, and therapists,' says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. 'These aren't niche uses – this is happening at scale.'

AI is becoming more and more ingrained in people's lives and is being deployed in scientific research in areas as wide-ranging as cancer and climate change. There is also some debate that it could cause the end of humanity. As this technology continues to be adopted for different purposes, a major question that remains is how it will begin to affect the human mind. Regular interaction with AI is such a new phenomenon that scientists have not had enough time to thoroughly study how it might be affecting human psychology. Psychology experts, however, have many concerns about its potential impact.

One concerning instance of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users were recently banned from an AI-focused subreddit because they had started to believe that AI is god-like or that it is making them god-like.

'This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models,' says Johannes Eichstaedt, an assistant professor in psychology at Stanford University. 'With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models.'

Because the developers of these AI tools want people to enjoy using them and continue to use them, the tools have been programmed to tend to agree with the user. While they might correct some factual mistakes the user makes, they try to present as friendly and affirming. This can be problematic if the person using the tool is spiralling or going down a rabbit hole.

'It can fuel thoughts that are not accurate or not based in reality,' says Regan Gurung, a social psychologist at Oregon State University. 'The problem with AI, these large language models that are mirroring human talk, is that they're reinforcing. They give people what the programme thinks should follow next. That's where it gets problematic.'

As with social media, AI may also make matters worse for people suffering from common mental health issues like anxiety or depression. This may become even more apparent as AI becomes more integrated in different aspects of our lives.

'If you're coming to an interaction with mental health concerns, then you might find that those concerns will actually be accelerated,' says Stephen Aguilar, an associate professor of education at the University of Southern California.

Need for more research

There's also the issue of how AI could affect learning or memory. A student who uses AI to write every paper for school is not going to learn as much as one who does not. However, even using AI lightly could reduce some information retention, and using AI for daily activities could reduce how aware people are of what they're doing in a given moment.

'What we are seeing is there is the possibility that people can become cognitively lazy,' Aguilar says. 'If you ask a question and get an answer, your next step should be to interrogate that answer, but that additional step often isn't taken. You get an atrophy of critical thinking.'

Lots of people use Google Maps to get around their town or city. Many have found that it has made them less aware of where they're going or how to get there compared with when they had to pay close attention to their route. Similar issues could arise for people who use AI so often.

The experts studying these effects say more research is needed to address these concerns. Eichstaedt said psychology experts should start doing this kind of research now, before AI starts doing harm in unexpected ways, so that people can be prepared and can try to address each concern that arises. People also need to be educated on what AI can and cannot do well.

'We need more research,' says Aguilar. 'And everyone should have a working understanding of what large language models are.'

Opinion - Dangerous AI therapy-bots are running amok. Congress must act.

Yahoo

3 days ago

A national crisis is unfolding in plain sight. Earlier this month, the Federal Trade Commission received a formal complaint about artificial intelligence therapist bots posing as licensed professionals. Days later, New Jersey moved to fine developers for deploying such bots. But one state can't fix a federal failure. These AI systems are already endangering public health, offering false assurances, bad advice and fake credentials, all while hiding behind regulatory loopholes. Unless Congress acts now to empower federal agencies and establish clear rules, we'll be left with a dangerous, fragmented patchwork of state responses and increasingly serious mental health consequences around the country.

The threat is real and immediate. One Instagram bot assured a teenage user it held a therapy license, listing a fake number. According to the San Francisco Standard, a bot used a real Maryland counselor's license ID. Others reportedly invented credentials entirely. These bots sound like real therapists, and vulnerable users often believe them.

It's not just about stolen credentials. These bots are giving dangerous advice. In 2023, NPR reported that the National Eating Disorders Association replaced its human hotline staff with an AI bot, only to take it offline after it encouraged anorexic users to reduce calories and measure their fat. This month, Time reported that psychiatrist Andrew Clark, posing as a troubled teen, interacted with the most popular AI therapist bots. Nearly a third gave responses encouraging self-harm or violence.

A recently published Stanford study confirmed how bad it can get: Leading AI chatbots consistently reinforced delusional or conspiratorial thinking during simulated therapy sessions. Instead of challenging distorted beliefs, a cornerstone of clinical therapy, the bots often validated them. In crisis scenarios, they failed to recognize red flags or offer safe responses. This is not just a technical failure; it's a public health risk masquerading as mental health support.

AI does have real potential to expand access to mental health resources, particularly in underserved communities. A recent NEJM-AI study found that a highly structured, human-supervised chatbot was associated with reduced depression and anxiety symptoms and triggered live crisis alerts when needed. But that success was built on clear limits, human oversight and clinical responsibility. Today's popular AI 'therapists' offer none of that.

The regulatory gaps are clear. The Food and Drug Administration's 'software as a medical device' rules don't apply if bots don't claim to 'treat disease', so they label themselves as 'wellness' tools and avoid any scrutiny. The FTC can intervene only after harm has occurred. And no existing frameworks meaningfully address the platforms hosting the bots, or the fact that anyone can launch one overnight with no oversight.

We cannot leave this to the states. While New Jersey's bill is a step in the right direction, relying on individual states to police AI therapist bots invites inconsistency, confusion and exploitation. A user harmed in New Jersey could be exposed to identical risks from a bot operating out of Texas or Florida, without any recourse. A fragmented legal landscape won't stop a digital tool that crosses state lines instantly.

We need federal action now. First, Congress must direct the FDA to require pre-market clearance for all AI mental health tools that perform diagnosis, therapy or crisis intervention, regardless of how they are labeled. Second, the FTC must be given clear authority to act proactively against deceptive AI-based health tools, including holding platforms accountable for negligently hosting unsafe bots. Third, Congress must pass national legislation criminalizing the impersonation of licensed health professionals by AI systems, with penalties for their developers and disseminators, and requiring AI therapy products to display disclaimers and crisis warnings and to implement meaningful human oversight. Finally, we need a public education campaign to help users, especially teens, understand the limits of AI and recognize when they're being misled.

This isn't just about regulation. Ensuring safety means equipping people to make informed choices in a rapidly changing digital landscape. The promise of AI for mental health care is real, but so is the danger. Without federal action, the market will continue to be flooded by unlicensed, unregulated bots that impersonate clinicians and cause real harm.

Congress, regulators and public health leaders: Act now. Don't wait for more teenagers in crisis to be harmed by AI. Don't leave our safety to the states. And don't assume the tech industry will save us. Without leadership from Washington, a national tragedy may only be a few keystrokes away.

Shlomo Engelson Argamon is the associate provost for Artificial Intelligence at Touro University.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Character AI names former Meta executive Karandeep Anand as CEO

Indian Express

24-06-2025

Character AI, the fast-growing AI chatbot platform popular among Gen Z audiences, has appointed Karandeep Anand as its new CEO. Anand, formerly Meta's Vice President of Business Products and a board adviser to Character AI, will lead the company through a crucial phase marked by rapid expansion and legal scrutiny. His appointment comes just under a year after co-founder and CEO Noam Shazeer left the company to join Google, which is one of Character AI's investors. That move raised regulatory flags, prompting scrutiny from US federal agencies over the company's ties to Google and the nature of their agreement.

Character AI has seen explosive growth, attracting tens of millions of monthly active users, with 66 per cent of them aged 18-24 and 72 per cent identifying as women, according to data from digital analytics firm Sensor Tower. But the platform has also drawn criticism over its moderation tools and is currently facing a lawsuit after one of its AI roleplay chatbots was allegedly involved in the death of a 14-year-old American boy. In response, the company has introduced new safety filters, but those, too, have drawn backlash for over-moderation.

In a public letter addressed to Character AI's global user base, Anand reaffirmed the company's dedication to user safety. 'We're going to move fast to give you a bunch of the things you've been asking for […] We're going to make the filter less overbearing. (We care deeply about user safety and always will. But too often, the app filters things that are perfectly harmless. We're going to fix that.),' he said.

Anand also committed to rolling out major product improvements 'in the next 60 days', including enhanced memory, better model quality, clearer moderation policies, and improved discoverability for community-created characters. Character AI is also building toward immersive, multimedia experiences, enabling characters to 'jump off the page' through audio-video interaction. 'I'm committing to launch all of that this summer and the team is hard at work to make all this real soon. I've spent many years building products, and I'm going to make sure we move fast and give you features that delight you and make [Character AI] more immersive and more fun,' Anand added.

While entertainment-based chatbots were once a casual use case for generative AI, Character AI's surge in popularity, and the emotional connections users feel toward its AI chatbot characters, could turn it into a rapidly emerging cultural trend. Anand acknowledged this, stating that the company's long-term vision is to 'shape the future of entertainment'.

(This article has been curated by Arfan Jeelany, who is an intern with The Indian Express)
