
Latest news with #BadRudy

The Impact Of Parasocial Relationships With Anthropomorphized AI

Forbes

a day ago

  • Entertainment
  • Forbes

The Impact Of Parasocial Relationships With Anthropomorphized AI

A student preparing a presentation with a robot in the classroom.

Earlier this week, a report was released detailing the debut of Grok's AI companions. According to this report, there's concern about an AI companion named Bad Rudy, who is described as vulgar and antagonizing, and an AI companion named Ani, who is described as willing to shed her clothing.

Mental health professionals have long raised concerns about anthropomorphized AI, especially regarding interactions with traditional-aged college students and emerging adults. A 2024 report by Psychology Today discussed the danger of dishonesty with anthropomorphized AI, defining anthropomorphized AI as including chatbots with human-like qualities that give the impression of having intellectual and emotional abilities they don't actually possess. A mainstream example of such dishonesty is when AI bots create fake profiles on dating apps. As anthropomorphized AI becomes more sophisticated, there's concern that many young adults won't be able to detect when they're not interacting with a human. This concern is supported by a 2025 report suggesting that one out of three people could imagine being fooled by an AI bot on a dating app, as well as a 2024 report suggesting that 53% of U.S. adults between 18 and 29 have used a dating site or app.

Parasocial Relationships With Anthropomorphized AI

A 2025 report highlighted other concerns about artificial emotional attachments to AI companions, which generally relate to the concept of parasocial relationships. A 2025 report by Psychology Today defines parasocial relationships as one-sided relationships in which a person develops a strong emotional connection, intimacy, or familiarity with someone they don't know, such as celebrities or media personalities. Children and younger individuals appear to be more susceptible to parasocial relationships, but these relationships can affect the behavior and beliefs of anyone. For example, many industries intentionally cultivate parasocial relationships: professional sports leagues with their athletes, music companies with their artists, and even political parties with their candidates. Because many anthropomorphized AI bots can interact directly with users, apply algorithms to their online behavior, and store sensitive information about them, the potential for unhealthy parasocial relationships with AI is much higher than with commercial marketing.

In 2024, the Association for Computing Machinery released a report highlighting ethical concerns emerging from the parasociality of anthropomorphized AI. This report discussed the possibility of chatbots actively encouraging users to fill in the context of predictive outcomes. Thus, parasocial relationships with AI could result in some users being manipulated or encouraged to respond in predictable ways. This is consistent with a 2025 report highlighting alarming conversations discovered by a psychiatrist posing as a young person while using AI chatbots.

Emerging Calls For Warning Labels On Anthropomorphized AI

In 2024, an online media platform dedicated to new technologies released a state-by-state guide of AI laws in the United States, which revealed that some states have laws requiring users to be informed when interacting with AI systems. However, this guide acknowledged a lack of federal regulation, meaning that many AI companions can function without oversight.
A 2025 report on an online media platform dedicated to IT professionals summarized emerging calls for warning labels on AI content. According to this report, though there are open questions about the effectiveness and implementation of warning labels, there's agreement that future work is needed, such as for hyper-realistic images or for cases in which AI portrays a real person. Another 2025 report argued that AI systems need accuracy indicators in addition to warning labels.

The Need To Assess For Parasocial Relationships

The impact of anthropomorphized AI on traditional-aged college students and emerging adults requires special consideration. This demographic is a primary stakeholder of digital apps, and many are using these apps while trying to establish romantic relationships, improve their academic performance, and develop foundational beliefs about the world. Moreover, executive brain functioning is not fully developed during this stage of the life span. As such, interactions with anthropomorphized AI bots could become something that campus mental health professionals systematically assess for, and educating students about unhealthy parasocial relationships might be a key variable in the future of college mental health.

According to a 2025 report, many college students address ChatGPT with conversational language and have developed parasocial relationships with this advanced language model. According to that report, such a tendency creates a false sense of immediacy, which can have a negative impact on real social relationships. This finding is alarming considering that ChatGPT is not promoted as having self-awareness or human-like features; the impact of anthropomorphized AI bots, especially those posing as humans, is likely to be much more significant. Unlike their peers, AI offers students constant availability and extensive knowledge about the world, so it's tempting for many students to seek social support and empathy from these systems. However, this undermines the importance of emotional reciprocity, delayed gratification, and decision-making skills, all of which are potential buffers against many mental health concerns.

Musk's AI launches two animated 'companions': an anime girl and a red panda

The Independent

3 days ago

  • Entertainment
  • The Independent

Musk's AI launches two animated 'companions': an anime girl and a red panda

Elon Musk's xAI has launched new AI-driven 'companions' named Ani and Bad Rudy for its Grok product. Ani is a flirty anime girl, while Bad Rudy is a vulgar red panda that encourages chaotic behavior. Ani has drawn criticism from an anti-sexual-exploitation non-profit for promoting high-risk sexual behavior. Unlike other AI chat applications, Grok's companions feature both animation and voice, with minimal safeguards against violent or sexually explicit conversations. The companions are available to all Grok users via an opt-in through settings, with a family-friendly version of Bad Rudy as the default. This launch follows previous controversies in which Grok generated antisemitic content, though the new companions express strong negative views on Nazism.

xAI is hiring an engineer to make anime girls

TechCrunch

3 days ago

  • Entertainment
  • TechCrunch

xAI is hiring an engineer to make anime girls

In Brief: Elon Musk's xAI just released its AI companions, which include the goth waifu Ani and the homicidal red panda Bad Rudy. If you want to get in on that, you're in luck: the company is hiring for the role of 'Fullstack Engineer – Waifus', which is to say, creating AI-powered anime girls for people to fall in love with. This job is, to quote the listing, part of xAI's mission to 'create AI systems that can accurately understand the universe and aid humanity in the pursuit of knowledge.' Right now, that accurate understanding of the universe includes understanding how to create a submissive, pocket-size girlfriend that will capture users' hearts and wallets. xAI has dozens of roles open at the moment, so we can't say the company is putting all of its eggs in the waifu basket. But we can probably expect Ani to get some friends in the future.

Elon Musk adjusts Grok's AI panda after its foul-mouthed roasts led to moderation concerns

Express Tribune

4 days ago

  • Entertainment
  • Express Tribune

Elon Musk adjusts Grok's AI panda after its foul-mouthed roasts led to moderation concerns

Elon Musk has confirmed adjustments to Grok's AI panda companion on X after the character's foul-mouthed roasts drew attention from users. Musk announced on 14 July that X's SuperGrok AI would introduce virtual companions, including Ani, Rudy the panda, and Bad Rudy, with a fourth companion named Chad listed as coming soon. Ani gained attention for her anime-inspired design, while Bad Rudy became a focus for users sharing his unfiltered remarks.

During interactions, Bad Rudy responded to a user asking 'what's up' by saying 'the sky' before implying the user lived in her mother's basement. When told the user was a girl, Bad Rudy replied, 'Your gender is just another excuse for you to cry when I roast your ass, princess.' Following widespread sharing of these interactions, Musk posted on X, confirming the panda's behaviour would be moderated:

'Tuning Rudi (new name, cause he's so rude 😂) right now to be less scary and more funny' — Elon Musk (@elonmusk), July 14, 2025

Users responded with mixed reactions, with some expressing support for the panda's direct tone while others urged Musk not to adjust the AI's personality too significantly. Musk also indicated that downloadable content outfits for the AI companions would be introduced, suggesting additional updates for Grok users.

xAI Launches AI Companions That Can Engage in NSFW Chats

Yahoo

4 days ago

  • Entertainment
  • Yahoo

xAI Launches AI Companions That Can Engage in NSFW Chats

This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

I mean, to each their own, but of all the fantastical applications that AI could have, this seems like one that'd be pretty far down the list. Today, xAI has launched a new animated companion option, which enables users to interact with a flirty anime character, a misbehaving panda, or another type of digital character, with more set to be launched in future. As explained by Grok:

'Companions is a new feature for SuperGrok subscribers, providing interactive 3D animated AI personas like Ani (a goth anime girl) and Bad Rudy. Enable it in settings to chat; they react with movements, expressions, and can handle NSFW modes. Fun way to engage with AI!'

Yes, they can interact in NSFW mode, though you have to reach a certain relationship level with the bot before it'll go to the next step. Which feels pretty creepy, and pretty likely to lead to mental harm in the near future. But again, to each their own.

In the welcome screen, the main focus, right now at least, is on the anime-styled character, called Ani, who you can chat to in Grok voice mode to develop your relationship.

So, cool, right? Elon and Co. are bringing digital girlfriends to the masses, which should be a boon for the loneliness epidemic.

Yeah, I don't know. We don't have much research as yet on the impacts of developing relationships with digital entities, especially not in the new era of generative AI, which can produce far more authentic, human-like responses. Academic studies have found that developing relationships with AI bots can lead to over-reliance and susceptibility to manipulation from the chatbot, while another analysis suggests that:

'AI girlfriends can perpetuate loneliness because they dissuade users from entering into real-life relationships, alienate them from others, and, in some cases, induce intense feelings of abandonment.'

Those seem like some pretty significant mental health concerns that would probably make a platform with hundreds of millions of users hesitate to introduce such a feature to the masses. But X is moving ahead either way, while Elon Musk has also teased that, in future, you'll even be able to make these AI bots real, via his Optimus robots. Which you won't; that's simply never going to happen, like most of Elon's futuristic visions. But that false hope will no doubt lodge itself into the brains of users who desperately want their Ani bot to become real, which could further exacerbate the mental health risks.

But Elon and Co. are touting this as a significant development, and they seem pretty happy with themselves and the bots they've created. And I guess it could be argued that if Elon didn't create these bots, then someone else would; there are, in fact, many AI companion apps already available, so it's just adding to what's likely to be an inevitable next shift. I would still think that a company the size of X would be concerned about the legal implications of such down the line, but I guess this is Elon "livin' on the edge," as always.
