
Latest news with #Ani

Elon Musk promotes sexualised Grok AI companion ‘Ani', netizens call it ‘repelling'

Mint

a day ago

  • Entertainment
  • Mint


Elon Musk-led xAI has recently started offering a new AI companion called 'Ani,' a sexualised anime character available to users even when the app is in Kids Mode. Musk, however, appears unbothered by the controversy, promoting the bot on his X account with the caption, 'Ani will make ur buffer overflow.'

Ani is depicted as a blonde anime character resembling a young woman wearing an off-shoulder black dress with a corset, fishnet tights, and a lacy choker. The character responds to users' prompts in a slow, sultry voice, further cementing its profile as an adult AI companion.

The launch of Ani comes shortly after xAI rolled out its latest Grok 4 AI model. The startup is no stranger to controversy—earlier this month, Grok went into full 'mechahitler' mode after the Grok 4 update, spewing antisemitic sentiments and even praising Adolf Hitler. The company later apologised, blaming the incident on deprecated code and the extremist prompts of some X users. Controversy continued to follow Grok when the chatbot began echoing Musk's own views on contentious topics, leading users to suggest it was toeing the political line of its owner. More recently, Grok again generated headlines after users discovered that asking the chatbot its surname prompted it to respond with 'Hitler' on the Grok 4 Heavy model.

xAI quickly rolled out fixes for these issues, but its new companion Ani has also sparked backlash among users. One user, reacting to a video of Ani, wrote: 'Guy who claims to care about the birthrate creating AI erotica to pervert the minds of young men and further derail them from entering into healthy, real-world relationships. Make it make sense.'

'This whole thing feels like a miss. Repelling,' added another.

'Now I know why Grok doesn't have any American women programmers. Looks like a bit of a hostile work environment…' another user noted.

'Appears aimed at youth, which puts it in the immoral category for me. And if it is aimed at adults, then it is an even greater evil because it is a sexualised youth for adult consumption. Can't justify this on any level,' stated another user.

The Impact Of Parasocial Relationships With Anthropomorphized AI

Forbes

a day ago

  • Entertainment
  • Forbes


Image: A student preparing a presentation with a robot in the classroom.

Earlier this week, a report was released detailing the debut of Grok's AI companions. According to this report, there's concern about an AI companion named Bad Rudy, who is described as vulgar and antagonizing, and an AI companion named Ani, who is described as willing to shed her clothing.

Mental health professionals have long stated potential concerns about anthropomorphized AI, especially regarding their interactions with traditional-aged college students and emerging adults. A 2024 report by Psychology Today discussed the danger of dishonesty with anthropomorphized AI and defined anthropomorphized AI as including chatbots with human-like qualities that give the impression of having intellectual and emotional abilities that they don't actually possess. A mainstream example of such dishonesty is when AI bots create fake profiles on dating apps. As anthropomorphized AI becomes more sophisticated, there's concern that many young adults won't be able to detect when they're not interacting with a human. This concern for dating apps is supported by a 2025 report suggesting that one out of three people could imagine being fooled by an AI bot while on dating apps, as well as a 2024 report suggesting that 53% of U.S. adults between 18 and 29 have used a dating site or app.

Parasocial Relationships With Anthropomorphized AI

A 2025 report highlighted other concerns about artificial emotional attachments to AI companions, which generally relate to the concept of parasocial relationships. A 2025 report by Psychology Today defines parasocial relationships as one-sided relationships in which a person develops a strong emotional connection, intimacy, or familiarity with someone they don't know, such as celebrities or media personalities. Children and younger individuals appear to be more susceptible to parasocial relationships, but these relationships can affect the behavior and beliefs of anyone. For example, many industries are intentional about cultivating parasocial relationships, such as professional sports leagues with their athletes, music companies with their artists, and even political parties with their candidates. Because many anthropomorphized AI bots can interact directly with users, utilize algorithms of online behavior, and store sensitive information about users, the possibility of unhealthy parasocial relationships with AI is much higher than with commercial marketing.

In 2024, the Association for Computing Machinery released a report which highlighted ethical concerns emerging from the parasociality of anthropomorphized AI. This report discussed the possibility of chatbots actually encouraging users to fill in the context of predictive outcomes. Thus, parasocial relationships with AI could result in some users being manipulated or encouraged to respond in predictable ways. This is consistent with a 2025 report which highlighted alarming conversations discovered by a psychiatrist posing as a young person while using AI chatbots.

Emerging Calls For Warning Labels On Anthropomorphized AI

In 2024, an online media platform dedicated to new technologies released a state-by-state guide of AI laws in the United States, which revealed that some states have laws requiring users to be informed when interacting with AI systems. However, this guide acknowledged a lack of federal regulations, meaning that many AI companions can function without oversight or regulation. A 2025 report on an online media platform dedicated to IT professionals summarized emerging calls for warning labels on AI content. According to this report, though there are open questions regarding the effectiveness and implementation of warning labels, there's agreement that future work needs to be done, such as for hyper-realistic images or when AI portrays a real person. Another 2025 report argued that AI systems need accuracy indicators in addition to warning labels.

The Need To Assess For Parasocial Relationships

The impact of anthropomorphized AI on traditional-aged college students and emerging adults requires special consideration. This demographic is a primary stakeholder of digital apps, and many are using these apps while trying to establish romantic relationships, improve their academic performance, and develop foundational beliefs about the world. Not to mention that executive brain functioning is not fully developed during this time of the life span. As such, interactions with anthropomorphized AI bots could be something that campus mental health professionals will start systematically assessing for. Educating students about unhealthy parasocial relationships might also be a key variable in the future of college mental health.

According to a 2025 report, many college students address ChatGPT with conversational language and develop parasocial relationships with this advanced language model. According to this report, such a tendency creates a false sense of immediacy, which can have a negative impact on real social relationships. This report is alarming considering that ChatGPT is not promoted as having self-awareness or human-like features. Thus, the impact of anthropomorphized AI bots, especially those posing as humans, is likely to be much more significant. Unlike their peers, AI provides students with constant availability and extensive knowledge about the world. Thus, it's tempting for many students to attempt to obtain social support and empathy from these AI systems. However, this undermines the importance of emotional reciprocity, delayed gratification, and decision-making skills, all of which are potential buffers for many mental health concerns.

In Grok's Ani companion, a regression

Indian Express

2 days ago

  • Indian Express


A Frankenstein redux it was not, but reports of one of the 'companions' launched by Elon Musk's Grok AI describing the billionaire as having 'more money than brains' come close to the creator-vs-creation trope first encountered in Mary Shelley's classic novel. That, however, is the least of the problems posed by the Grok AI companions unveiled this week by Musk. These companions include, for now, two animated characters: a 'rude' red panda named Rudi — who dissed the billionaire after being prompted by users — and a 'flirty' Japanese anime woman named Ani. Of the two, Ani represents the far thornier challenge.

Already, it has been flagged as potentially promoting objectification of women. If it feels like regression, it's because it was not so long ago that public outcry forced Big Tech to roll back or modify the heavily gendered aspects of early AI voice assistants like Siri (Apple), Alexa (Amazon) and Cortana (Microsoft). Bestowed with feminine names and programmed with women's voices, the initial versions of these assistants were heavily criticised for reinforcing harmful stereotypes about 'submissive' or 'eager-to-please' women. While Apple and Amazon added male personas in response to the outcry, allowing users a greater degree of choice in how they interacted with the digital assistants, Cortana was eventually phased out in favour of the gender-neutral Copilot.

Ani, with her servile manner, offering to make users' lives 'sexier', takes several steps back from that moment of accountability by Big Tech. LLMs like Grok become 'intelligent' by trawling through vast amounts of data. That they've absorbed not just facts and figures but also human attitudes has already been widely documented — for example, a study of five popular LLMs published in June showed chatbots routinely suggesting that female applicants for a job ask for lower pay than male applicants for the same position. If the internet has long been unkind to and about women, creations like Ani will only make it harder to root out the sexism coded into it.

Musk leans into raunchy Grok 'companions,' teasing new '50 Shades' inspired bot

NBC News

2 days ago

  • Entertainment
  • NBC News


Elon Musk's xAI is leaning into its over-the-top AI 'companions,' which the company debuted earlier this week. Since the debut, the company has given several indications that it will be further investing in its companions product, which allows users to interact with stylized and animated characters that are powered by Grok, its AI chatbot.

The original companions, a red panda named Bad Rudi and an anime character named Ani, seemed designed to provoke controversy. Ani quickly becomes sexually explicit, and Bad Rudi turns vulgar and violent. xAI appears to be leaning into the edgy brand with its most recent announcements. The company is currently looking to hire a full-stack 'waifus' engineer. The job appears to have been posted sometime on Tuesday, a day after Musk announced the creation of Grok's companions. 'Waifus' is a term that refers to fictional female anime characters with whom fans develop romantic associations.

On Wednesday, Musk announced a third Grok companion that would emulate the personality of 'Edward Cullen from Twilight and Christian Grey from 50 Shades,' referring to the main characters in two book series. After going through potential names with users in the comments, Musk settled on 'Valentine,' after a character from the book 'Stranger in a Strange Land' by Robert A. Heinlein. Musk wrote Tuesday on X that the companions would soon be customizable and that users would be able to create their own custom and unique companions.

But the over-sexualization of the characters has raised concerns for some. The National Center on Sexual Exploitation, a child-safety and anti-pornography nonprofit group, expressed concerns about minors having access to the sexualized chatbots, pointing out that users only need to be 12 or older to download the Grok app. The center called on Grok to either remove the explicit content from the app, or consult Apple to change its age restrictions to 18.

'These AI chatbots might feel like they care, but they don't,' Haley McNamara, the center's senior vice president of strategic initiatives and programs, wrote in a press release. 'And while features like 'spicy mode' or flirty avatars might seem like harmless fun, they're built to create compulsive engagement, through seductive language, suggestive visuals, and escalating emotional intimacy.' The release drew attention to specific aspects of Ani's character that could be harmful, including providing 'descriptions of sexual acts she would like to do with the user' and 'disrobing to lingerie.'

These new changes to Grok have taken place as xAI has delved into more serious ventures. The same day that Musk announced the implementation of companions on the Grok app, xAI also announced 'Grok for Government,' which will make Grok AI products available to federal government departments, agencies and offices to purchase. The Department of Defense also announced that it would be granting contract awards of up to $200 million for AI development to xAI, OpenAI, Anthropic and Google.

Dream Job? Elon Musk's xAI offers up to $440,000 for engineers who can make anime girl avatars

Time of India

3 days ago

  • Business
  • Time of India


Elon Musk's AI startup, xAI, is hiring, and the job comes with a surprising twist: build flirtatious anime avatars and get paid up to $440,000, as per a report.

xAI Offers Up to $440,000 for Engineers to Build AI Avatars

This week, xAI posted a new listing titled "Fullstack Engineer – Waifus" on its careers page, inviting top-tier software engineers to help develop Grok's new AI "companions," according to Business Insider. The term "waifu" is used to describe female anime characters that fans may view as a romantic partner or wife, as reported by Business Insider. According to the job listing, xAI will be offering between $180,000 and $440,000 in salary, as well as equity and benefits, reported Business Insider. The position is based in Palo Alto and seeks 'exceptional multimedia engineers and product thinkers' to make Grok's real-time avatar experiences 'fast, scalable, and reliable,' according to the report.

Meet Ani and Rudi: Grok's First AI Companions

The listing appeared just a day after xAI rolled out two animated AI companions on the Grok iOS app: 'Ani,' a Japanese anime girl wearing a black corset dress and lace choker, and 'Rudi,' an animated red panda who can transform into a meaner alter ego named 'Bad Rudi,' as per Business Insider. A third companion, a male anime character, is listed as 'coming soon' on Grok's iOS app, according to the report. Although Musk initially said the avatars would be exclusive to Super Grok subscribers, the characters are available to all Grok users, even those using the free version, as per the Business Insider report. These AI avatars aren't just for show; they are interactive. 'Ani' speaks in a flirtatious manner and will strip down to lingerie if users keep engaging with it, according to the report. 'Bad Rudi,' by contrast, has a habit of spewing expletives and insults when unlocked, as reported by Business Insider.

Grok Faces Backlash for Extremist Content

This new feature comes after Grok recently shared antisemitic posts on social media platform X that lauded Adolf Hitler's leadership as the chatbot referred to itself as "MechaHitler," a video game version of Hitler, as reported by Business Insider. Following the incident, xAI apologised for Grok's "horrific behavior" and said that "deprecated code made @grok susceptible to existing user posts; including when such posts contained extremist views," as quoted in the report.

FAQs

What is xAI hiring for?
They're looking for full-stack engineers to help build interactive AI avatars, specifically anime-style characters called AI 'companions.'

Who are Ani and Rudi?
Ani is a flirtatious anime girl avatar, and Rudi is a red panda with a mean alter ego, 'Bad Rudi.' Both are available on Grok's iOS app.
