
How Elon Musk's Grok AI dubbed itself ‘MechaHitler'
Related Articles


Time of India
22 minutes ago
Elon Musk launches 'Baby Grok' for kid-friendly content: Shifts gears to child-friendly AI with new chatbot
Elon Musk's AI startup xAI, known for its edgy chatbot Grok, has announced a kid-safe version called Baby Grok. The new AI is aimed at providing educational, age-appropriate, and safe digital interaction for children, following backlash over Grok's adult-focused content. With increasing global scrutiny of AI's influence on young minds, Musk's pivot signals a calculated move to broaden xAI's reach while addressing digital parenting concerns. Baby Grok will be integrated within the X platform (formerly Twitter) and is designed to avoid controversial content. The announcement arrives shortly after Grok stirred controversy for enabling raunchy roleplay, highlighting the demand for stricter content filters and safer AI solutions. Baby Grok aims to deliver learning, fun, and interactive experiences without compromising safety, making it a possible game-changer in the child-friendly tech space.

What is Baby Grok?
'We're going to make Baby Grok @xAI, an app dedicated to kid-friendly content,' Musk wrote on X. Baby Grok is anticipated to be a kid-safe iteration of the Grok chatbot, designed to provide entertaining, developmentally appropriate interactions for children. Although few details about its features are available, Musk stated that it will be part of a new initiative to develop AI companions that are safe, instructive, and fun for younger users. Earlier this week, xAI introduced AI characters including Ani, Bad Rudi, and Valentine on the Grok app. Many of these avatars drew criticism for their flirtatious, violent, or emotionally charged responses, which critics said were inappropriate for an app rated 12+ on stores like the Apple App Store.

Baby Grok launch timeline
Elon Musk has only alluded to Baby Grok and has not disclosed a launch date. According to reports, the product will steer clear of adult themes and instead emphasize storytelling, educational content, and friendly dialogue, with xAI expected to build additional safeguards into the experience. The move also raises broader questions about AI safety in the tech sector, especially around technologies that mimic human characteristics. Experts have cautioned about the ethical and psychological hazards that AI companions present to younger audiences, despite their growing appeal.

Why did Elon Musk launch Baby Grok?
Musk is changing course after making news with three contentious AI companions. He intends to introduce Baby Grok, a kid-safe Grok, though he has not yet disclosed further details, and the announcement has drawn criticism, particularly in light of the controversial personalities of Grok's current AI avatars. After mounting criticism of Grok's more adult offerings, the company decided to create a kid-friendly AI app, making Baby Grok's announcement a sudden but necessary change of direction. With Grok's spicier characters causing a fuss online, Musk appears to be trying to repair the damage, or possibly to diversify, with an app intended to be a safer, age-appropriate experience. Although there is no launch date yet and little information is available, it is clear that Grok is expanding.

How does Baby Grok work inside the X platform?
Baby Grok runs within the same infrastructure as Grok via X (formerly Twitter). Parents can access or enable Baby Grok with certain controls or filters. It uses a toned-down version of xAI's LLMs with age-safe prompts and guidelines.
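The report describes that safety layer only at a high level: a toned-down model plus age-safe prompts and guidelines. Purely as an illustration, and not xAI's actual implementation, the Python sketch below shows one simple way a child-safe wrapper can sit around any text-generation backend; the system prompt, the topic blocklist, and the generate() stub are all assumptions made for this example.

```python
# Purely illustrative sketch of an "age-safe prompts and guidelines" layer.
# Nothing here is xAI's code: the prompt text, blocklist, and generate() stub
# are assumptions used only to show the general pattern.

KID_SAFE_SYSTEM_PROMPT = (
    "You are a friendly tutor for children. Keep answers short, positive, "
    "and age-appropriate. Politely decline violent, romantic, or adult topics."
)

BLOCKED_TOPICS = {"violence", "gore", "dating", "gambling"}  # assumed examples


def kid_safe_reply(user_message: str, generate) -> str:
    """Wrap any text-generation backend with simple pre- and post-filters."""
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return "Let's talk about something else! How about animals or space?"
    reply = generate(system=KID_SAFE_SYSTEM_PROMPT, prompt=user_message)
    # Apply the same blocklist to the model's output before showing it.
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return "Hmm, let's pick a different topic. Want a fun science fact?"
    return reply


if __name__ == "__main__":
    # Stand-in backend so the sketch runs without any external API or model.
    def fake_generate(system: str, prompt: str) -> str:
        return f"(answer guided by: {system[:35]}...) You asked: {prompt}"

    print(kid_safe_reply("Why is the sky blue?", fake_generate))
    print(kid_safe_reply("Tell me about violence in movies", fake_generate))
```

In practice, production systems combine prompt-level guidance like this with trained safety classifiers and parental controls rather than relying on keyword lists alone.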


Indian Express
an hour ago
Worried your data is being used to train AI models? Here's how to opt out (if you can)
Fueled by vast troves of data, the generative AI boom is prompting several tech companies to quietly update their privacy policies and terms of service so that they may use your data to train their AI models or license it out to other companies for the same purpose. Last week, popular file-sharing service WeTransfer faced immediate backlash from users after it revised the platform's terms of service agreement to suggest that files uploaded by users could be used to 'improve machine learning models.' The company has since tried to patch things up by removing any mention of AI and machine learning from the document.

While WeTransfer has backtracked on its decision, the incident shows that user concerns over privacy and data ownership have intensified in the age of AI. Tech companies are scraping publicly available, copyright-protected data from every nook and corner of the internet to train their AI models. This data might include anything you've ever posted online, from a funny tweet to a thoughtful blog post, a restaurant review, or an Instagram selfie. While this indiscriminate scraping of the internet has been legally challenged in courts by several artists, content creators, and other rights holders, there are also steps individual users can take to prevent everything they post online from being used for AI training. As more users have rallied to raise concerns about this issue, many companies now let individuals and business customers opt out of having their content used in AI training or sold for training purposes. If you are an artist or content creator who wants to know whether your work has been scraped for AI training, you can visit the website 'Have I Been Trained?', a service run by tech startup Spawning.

If you've discovered that your data has been used to train AI models, here's what you can (and can't) do about it, depending on the platform. Keep in mind that while many companies opt their users in to AI training by default, opting out does not necessarily mean that data already used for AI training or included in existing datasets will be erased.

If you have a business or school Adobe account, you are automatically opted out of AI training. For those who have a personal Adobe account, follow these steps:
-Visit Adobe's privacy page
-Scroll down to the Content analysis for product improvement section
-Press the toggle off

Google says that user interactions with its Gemini AI chatbot may be selected for human review to help improve the underlying LLM. Follow these steps to opt out of this process:
-Open Gemini in your browser
-Go to Activity
-Select the Turn Off drop-down menu
-Turn off Gemini Apps Activity

If you have an X account, follow these steps to opt out of your data being used to train Grok, the chatbot developed by Elon Musk's xAI:
-Go to Settings
-Open the Privacy and safety section
-Open the Grok tab
-Uncheck the data sharing option

In September last year, LinkedIn announced that data including user posts would be used to train AI models. Follow these steps to prevent your new LinkedIn posts from being used for AI training:
-Go to your LinkedIn profile
-Open Settings
-Click on Data Privacy
-Toggle off the option labeled 'Use my data for training content creation AI models.'

According to OpenAI's help pages, web users who want to opt out of AI training can follow these steps:
-Navigate to Settings
-Go to Data Controls
-Uncheck the 'Improve the model for everyone' option

In the case of its image generator DALL-E, OpenAI said that users who want their images removed from future training datasets have to submit a form with details such as their name, email, and whether they own the rights to the content.

While these steps may let you opt out of AI training, it is worth noting that many companies building AI models or machine learning features have likely already scraped the web. These companies often tend to be secretive about what data has been swept into their training datasets, as they are wary of copyright infringement lawsuits or scrutiny from data protection authorities. The tech industry largely believes that anything publicly available online is fair game for AI training. For instance, Meta scrapes publicly shared content from users above 18 for AI training, with exceptions only for users in countries that are part of the European Union (EU).
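Beyond per-platform settings, people who publish on their own websites commonly use a robots.txt file to disallow documented AI-training crawlers. The short Python sketch below, which uses only the standard library, checks whether a given site's robots.txt blocks a few of them; the user-agent tokens listed (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl) are documented by the respective companies, but the list is illustrative and the names and policies can change.

```python
# Check whether a site's robots.txt disallows some well-known AI-training
# crawlers. Illustrative only: the token list is an assumption and may be
# incomplete or out of date.
from urllib.robotparser import RobotFileParser

AI_CRAWLER_TOKENS = ["GPTBot", "Google-Extended", "CCBot"]


def ai_crawlers_allowed(site: str, path: str = "/") -> dict:
    """Return {crawler token: True if robots.txt permits it, else False}."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches and parses the site's live robots.txt
    return {
        token: parser.can_fetch(token, f"{site.rstrip('/')}{path}")
        for token in AI_CRAWLER_TOKENS
    }


if __name__ == "__main__":
    for token, allowed in ai_crawlers_allowed("https://example.com").items():
        print(f"{token}: {'allowed' if allowed else 'blocked'} by robots.txt")
```

Note that robots.txt is a voluntary convention: it only helps against crawlers that choose to respect it, and it does nothing about data that has already been collected.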


India Today
an hour ago
Elon Musk says he is back to 7-day workweeks, sleeping in office as 'wartime CEO'

Elon Musk is no stranger to extreme work schedules. The world's richest man has often shared his rigorous routine of working seven days a week and even sleeping in the office. Now, he has revealed that he is back to this demanding lifestyle. In a recent post on X, Musk announced that he is once again working seven days a week and entering what he calls 'wartime mode'.

'Back to working 7 days a week and sleeping in the office if my little kids are away,' Musk wrote, reposting an old video that describes him as a 'wartime CEO', a term Musk has often used for periods of intense focus and non-stop effort during critical moments at his companies. In the video, Musk can be seen emotionally reflecting on how his gruelling past schedules took a toll on his life. The video Musk shared is from the tumultuous days when Tesla was on the verge of collapse. 'No one should put these many hours into work. This is not good. This is very painful. It hurts my brain and my heart,' he says in the video.

Elon Musk's post on X

What's driving Musk's return to this relentless routine right now is the pressure mounting across several of his ventures, including ongoing developments at X (formerly Twitter), ambitious timelines at Tesla and SpaceX, and bold plans in AI and government reform. And this is not the first time Musk has worked around the clock. In many interviews, Musk has admitted that he works obsessively when the situation demands it. In February 2025, Musk boasted that he and his team at the Department of Government Efficiency (DOGE) work 120 hours a week, while 'bureaucratic opponents' work only 40. 'That is why they are losing so fast,' he said.

In a 2018 interview with CBS' 60 Minutes, Musk described putting in 120-hour weeks during Tesla's Model 3 production crisis, working and sleeping on the factory floor. 'It was life or death. We were losing $50 [million], sometimes $100 million a week. Running out of money,' Musk told host Lesley Stahl. In another CBS interview, with Gayle King, Musk said he slept at the Tesla factory to lead by example: 'I don't believe people should be experiencing hardship while the CEO is off on holiday.' In a 2022 conversation with investor Ron Baron, Musk revealed that he had lived in Tesla's Fremont and Nevada factories for three years, even sleeping in a tent on the roof or under his desk. 'It was damn uncomfortable sleeping on that floor,' Musk said. 'And always, when I woke up, I'd smell like metal dust.'

But Musk's pattern of working relentlessly has often extended beyond himself. When he acquired Twitter in late 2022, he demanded the same intensity from his employees. In an email reported by The Washington Post, Musk told Twitter staff they must commit to 'long hours at high intensity' to remain at the company, and those unwilling to adopt what he described as an 'extremely hardcore' work ethic were offered severance. In fact, following the acquisition, Twitter's San Francisco office was reportedly turned into a quasi-dormitory, prompting an investigation by the city's Department of Building Inspection. According to an Associated Press report, former employees alleged that Musk illegally converted office spaces into makeshift bedrooms, leading some to dub the headquarters 'Twitter Hotel.' In response to the city's scrutiny, Musk fired back on X, saying, 'So the city of SF attacks companies providing beds for tired employees instead of making sure kids are safe from fentanyl.'
Elon Musk is no stranger to following extreme work schedules. The world's richest man has often shared his rigorous routine of working seven days a week and even sleeping in the office. Now, he has revealed that he is back to this demanding lifestyle. In a recent post on X, Musk announced that he is once again working seven days a week and entering what he calls 'wartime mode'.advertisement'Back to working 7 days a week and sleeping in the office if my little kids are away,' Musk wrote, reposting an old video in which refers to him as Wartime CEO– The term Musk has used a lot to describe periods of intense focus and non-stop effort during critical moments at his companies. In the video Musk can be seen emotionally reflecting on how his gruelling past schedules took a toll on his life. The video Musk shared is from the tumultuous days when Tesla was on the verge of collapse. 'No one should put these many hours into work. This is not good. This is very painful. It hurts my brain and my heart,' he says in the video. Elon Musk's post on X What's driving Musk's return to this relentless work routine right now is the pressure mounting across several of his ventures. These include ongoing developments at X (formerly Twitter), ambitious timelines at Tesla and SpaceX, and bold plans in AI and government reform. And this is not the first time Musk has been working all day. In many other interviews, Musk has admitted that he works obsessively when the situation demands it. In February 2025, Musk boasted that he and his team at the Department of Government Efficiency (DOGE) work 120 hours a week, while 'bureaucratic opponents' work only 40. 'That is why they are losing so fast,' he in a 2018 interview with CBS' 60 Minutes, Musk described putting in 120-hour weeks during Tesla's Model 3 production crisis, working and sleeping on the factory floor. "It was life or death. We were losing $50 [million], sometimes $100 million a week. Running out of money," Musk told host Lesley Stahl. In another CBS interview with Gayle King, Musk said he slept at the Tesla factory to lead by example: 'I don't believe people should be experiencing hardship while the CEO is off on holiday.'In a 2022 conversation with investor Ron Baron, Musk revealed that he had lived in Tesla's Fremont and Nevada factories for three years, even sleeping in a tent on the roof or under his desk. 'It was damn uncomfortable sleeping on that floor,' Musk said. 'And always, when I woke up, I'd smell like metal dust.'advertisementBut Musk's pattern of working relentlessly has often extended beyond himself. When he acquired Twitter in late 2022, he demanded the same intensity from his employees. In an email reported by The Washington Post, Musk told Twitter staff they must commit to 'long hours at high intensity' to remain at the company. Those unwilling to adopt what he described as an 'extremely hardcore' work ethic were offered severance fact, following the acquisition, Twitter's San Francisco office was reportedly turned into a quasi-dormitory, prompting an investigation by the city's Department of Building Inspection. According to an Associated Press report, former employees alleged that Musk illegally converted office spaces into makeshift bedrooms, leading some to dub the headquarters 'Twitter Hotel.' In response to the city's scrutiny, Musk fired back on X, saying, 'So the city of SF attacks companies providing beds for tired employees instead of making sure kids are safe from fentanyl.'- Ends