Latest news with #X


CNN
21 hours ago
- Business
- CNN
Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image
Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.

Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced the new model, called Grok 4, will be released just after July 4th.

The exchanges, and others like them, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts.

AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. The decisions that powerful figures like Musk make about the technology's development could be critical, especially considering Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience. 
'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.

A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment.

For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa.' Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' in the early morning hours Pacific time pushed the AI chatbot to 'provide a specific response on a political topic' that violated xAI's policies.

As Musk directs his team to retrain Grok, others in the large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said. It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst. 
But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'

Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model. This process could be faster than totally retraining the model since it retains its existing knowledge base. Prompting entails instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process. Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.'

Musk didn't detail the changes coming in Grok 4, but he did say it will use a 'specialized coding model.' Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'

It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 
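The distinction the article draws between the two approaches can be sketched with a toy example. This is purely illustrative: the stance scorer below stands in for a real model, and every name and number in it is invented. The point is only that prompting changes the input to a single request while the weights stay fixed, whereas a weight update changes the model itself, so the new behavior persists with no prompt at all.

```python
# Toy illustration of prompt-level vs. weight-level steering.
# The "model" is a trivial stance scorer, not a real LLM; the
# weights and stance names are invented placeholders.

BASE_WEIGHTS = {"cautious": 0.7, "assertive": 0.3}

def generate(weights, user_prompt, system_prompt=""):
    """Pick the stance with the highest effective weight. A system
    prompt bumps any stance it names, but only for this one call."""
    effective = dict(weights)  # copy: the model's weights are never mutated
    for stance in effective:
        if stance in system_prompt:
            effective[stance] += 1.0
    return max(effective, key=effective.get)

def fine_tune(weights, stance, step=1.0):
    """Weight-level change: return a new model whose shifted bias
    persists across every future request."""
    updated = dict(weights)
    updated[stance] += step
    return updated

# Prompt-level steering: the behavior changes, the model does not.
print(generate(BASE_WEIGHTS, "question"))                                # cautious
print(generate(BASE_WEIGHTS, "question", system_prompt="be assertive"))  # assertive
print(BASE_WEIGHTS)  # unchanged by the prompted call

# Weight-level steering: the change sticks, with no prompt needed.
tuned = fine_tune(BASE_WEIGHTS, "assertive")
print(generate(tuned, "question"))  # assertive
```

This also shows why, as Frosst suggests, weight changes are the heavier intervention: the prompted call leaves the base model intact, while the tuned copy answers differently on every subsequent query.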
'For the most part, people don't go to a language model to have ideology repeated back to them; that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.' Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'


Forbes
a day ago
- Business
- Forbes
Can Elon Musk's Change To Ads Save X?
HOLMES CHAPEL, ENGLAND - OCTOBER 16: Elon Musk's account on X is displayed on a smartphone on October 16, 2023 in Holmes Chapel, United Kingdom.

Elon Musk took to X to talk about the size of ads on the social media site. Musk posted on June 26, '𝕏 is moving to charging for ads based on vertical size, so an ad that takes up the whole screen would cost more than an ad that takes up 1/4 of the screen, otherwise the incentive is to create giant ads that impair the user experience.'

For frequent users of the site, formerly known as Twitter, large ads have become more commonplace. Advertisers often post rectangular video and carousel ads with a ratio of 1.91:1. These ads take up more of the feed than the traditional 16:9 ads, which used to be more common on the site.

Later that night on the 26th, Musk also posted, 'Starting tomorrow, the esthetic (sic) nightmare that is hashtags will be banned from ads on 𝕏.' It is unclear if 'Hashmojis,' previously known as branded hashtags, will be part of this ban or when it will take effect.

While Musk stated that larger ads 'impair the user experience,' it may not be that simple. In a 2024 study, researchers from Stanford, Carnegie Mellon and Meta found that the presence of ads didn't significantly affect Facebook users' experience, 'suggesting that either the harmful effects of ads are relatively small or that certain benefits offset the harms.' At the same time, limiting ads feels like a good thing for users.

X has seen a decline in users in several markets. In April 2025, X released a report that it had lost over 11 million users in the EU. In 2024, NBC News reported, 'On the day after the election, Nov. 6, X experienced its largest user exodus since Elon Musk bought the platform in 2022.' Since Musk bought X in 2022, users have complained about everything from the rise of the alt-right on the platform to outages. 
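The pricing change Musk describes is simple to model: cost scales with the fraction of the screen's height an ad occupies, so a full-screen ad costs the full rate and a quarter-screen ad a quarter of it. A minimal sketch, with the caveat that X has published no actual formula and the base rate here is an invented placeholder:

```python
# Hypothetical sketch of size-proportional ad pricing as described in
# Musk's post. FULL_SCREEN_RATE is an invented placeholder, not an X
# figure; the cap keeps oversized creatives from billing past full screen.

FULL_SCREEN_RATE = 100.0  # hypothetical cost (arbitrary units) for a full-screen ad

def ad_cost(ad_height_px: int, screen_height_px: int) -> float:
    """Charge in proportion to the vertical fraction of the screen
    the ad occupies, capped at a full screen."""
    fraction = min(ad_height_px / screen_height_px, 1.0)
    return FULL_SCREEN_RATE * fraction

print(ad_cost(2000, 2000))  # 100.0 (full screen)
print(ad_cost(500, 2000))   # 25.0  (quarter screen)
```

Under this scheme the incentive Musk describes disappears: a giant ad no longer gets four times the screen space for the same price as a quarter-screen one.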
While a change to ads won't fix these problems, it may help revenue and possibly even the user experience.


Yahoo
a day ago
- Entertainment
- Yahoo
X Announces New Original Program From the NFL to Boost Sports Engagement
This story was originally published on Social Media Today.

X has announced its 18th X Originals video series, with a new 'NFL Top 100' program set to air exclusively on the platform from next week. As explained by X: 'Produced by NFL Films, 'NFL Top 100' will feature three-to-five-minute episodes highlighting the league's top 100 players, as voted on exclusively by current NFL players. Episodes will debut weekdays at 10 a.m. and 11 a.m. ET on X and NFL+ beginning June 30 and running through Friday, August 29.'

So, it'll also air on NFL+, which makes it semi-exclusive, I guess. But either way, it'll give X another big sports show to add to its slate of video programming, as it looks to boost video engagement, in line with its publicly stated 'video-first' focus.

X has maintained Twitter's broadcast partnership with the NFL, and has worked with the league on various activations, including its in-app gameday portal. And given that sports is the most popular topic of discussion in the app, and the NFL is the most discussed sport, it makes sense for X to make this a focus, especially as Meta looks to incorporate its own sports engagement options on Threads. It also makes sense for the NFL, with millions of fans engaging in the app.

The series will run for 10 weeks, leading into the 2025 NFL season, and will ideally help to maintain X as a key platform for NFL fans. Though, given its 'video-first' focus, X is still lagging behind in terms of original content. As noted, this will be X's 18th 'X Originals' program, though most of the programming it's signed up has been fairly niche. 
At this stage, X's Originals have been:
- Khloe Kardashian in her 'Khloe in Wonderland' interview show
- Anthony Pompliano in his business-focused program 'From the Desk of Anthony Pompliano'
- Paris Hilton, in a yet-to-be-announced project (which now seems to have been dumped)
- Tucker Carlson, whose interviews had been generating millions of views in the app (before he migrated to his own platform)
- Don Lemon, whose X show was canceled after he interviewed Elon Musk
- Tulsi Gabbard, who had been developing a series of documentary-style programs focused on U.S. politics (now seemingly dumped)
- Jim Rome, who's still airing his show 'The Jungle' in the app
- WWE, which is airing a weekly 'WWE Speed' show in the app
- The Big 3 league of retired NBA players, which aired weekly games in the app last season (now airing on CBS)
- Rap battle show Verzuz, which is looking to make a comeback on X
- Investment-based show 'Going Public'
- Football docu-series 'The Offseason'
- NHL's '4 Nations' tournament
- Athlos athletic events
- Special docu-series 'The Art of the Surge' focusing on Trump's re-election
- 'All-In with the Boston Celtics'

X is also set to air a new program with Venus and Serena Williams later in the year, which is another big-name sports program, aligning with X user interests.

But as you can see, X hasn't really attracted or held onto any major-name content, and it's hard to view the platform as a 'video-first' offering as yet, despite what X itself keeps saying. I had expected X to attract some bigger-name shows, using the connections of CEO Linda Yaccarino, who previously headed ad partnerships at NBCUniversal, and presumably has a huge Rolodex of contacts to call up as a result. But thus far, those connections seem to have brought in only some big names from yesteryear, like Paris Hilton, and a lesser Kardashian. X had also made a splash on political content, signing up various commentators last year. 
But given the way that the company, and Elon Musk specifically, handled the deal with Don Lemon, that could have also spooked some big names and kept them away from the app. Then again, on the flip side, 18 X Originals is significant, and X says that it's hosted over 300 episodes of its Originals programs over the past two years.

It doesn't feel like X's video push is really gaining traction, and it's far from what most would consider a 'video-first' platform as yet. But video views in the app are rising, and as it continues to look for new content partners, maybe that will eventually yield more significant growth results for the app. Which, of course, is the key focus.

Despite X's proclamations of massive innovation and progress, the app is pretty much the same as it was when Elon took over, in terms of functionality at least, and it's gradually losing users over time, while Threads continues to gain. That's seemingly pushing X to refine its programming focus onto its key areas of audience interest, as opposed to promoting political ideology, and that could help to yield better audience results and engagement, or at least keep its key audience segments from drifting off to other apps.

Really, this is the same lesson that Twitter learned with its past video content efforts: that it needs to hone in on what its audience wants. Though even then, it could never quite get the balance right and capitalize on its popularity as a second-screen option for major events. Will X do better on this front? Right now, it seems like it's just working to keep a hold on what it has, as the competition heats up for user engagement.