
Latest news with #xAI

Why tech billionaires want bots to be your BFF

Mint

2 hours ago

  • Business
  • Mint

Why tech billionaires want bots to be your BFF

By Tim Higgins, The Wall Street Journal

In a lonely world, Elon Musk, Mark Zuckerberg and even Microsoft are vying for affection in the new "friend economy." (Illustration: Emil Lendof/WSJ, iStock)

Grok needs a reboot. The xAI chatbot apparently developed too many opinions that ran counter to the way the startup's founder, Elon Musk, sees the world.

The recent announcement by Musk—though decried by some as "1984"-like rectification—is understandable. Big Tech now sees the way to differentiate artificial-intelligence offerings as creating the perception that the user has a personal relationship with the product. Or, more weirdly put, a friendship—one that shares a similar tone and worldview.

The race to develop AI is framed as one to develop superintelligence. But in the near term, its best consumer application might be curing loneliness. That feeling of disconnect has been declared an epidemic, with research suggesting loneliness can be as dangerous as smoking up to 15 cigarettes a day. A Harvard University study last year found AI companions are better at alleviating loneliness than watching YouTube and are "on par only with interacting with another person."

It used to be that if you wanted a friend, you got a dog. Now, you can pick a billionaire's pet product. Those looking to chat with someone—or something—help fuel AI daily-active-user numbers. In turn, that metric helps attract more investors and money to improve the AI. It's a virtuous cycle fueled with the tears of solitude that we should call the "friend economy."

That creates an incentive to skew the AI toward a certain worldview—as the right-leaning Musk appears to be aiming to do shortly with Grok. If that's the case, it's easy to imagine an AI world where all of our digital friends are superfans of either MSNBC or Fox News.

In recent weeks, Meta Platforms chief Mark Zuckerberg has garnered a lot of attention for touting a stat that says the average American has fewer than three friends and a yearning for more. He sees AI as a solution and talks about how consumer applications will be personalized. "I think people are gonna want a system that gets to know them and that kind of understands them in a way that their feed algorithms do," he said during a May conference.

Over at Microsoft, the company's head of AI, Mustafa Suleyman, has also been talking about the personalization of AI as the key to differentiation. "We really want it to feel like you're talking to someone who you know really well, that is really friendly, that is kind and supportive but also reflects your values," he said during an April appearance on the Big Technology Podcast.

Still, he added, Microsoft wants to impose boundaries that keep things safe. "We don't really want to engage in any of the chaos," Suleyman said. "The way to do that, we found, is that it just stays reasonably polite and respectful, super even-handed; it helps you see both sides of an argument."

With all of that in mind, it comes as little surprise that the current crop of chatbots is designed to sound like you're having a conversation with another human. This has resulted in lots of pearl clutching. There are academics warning about the dangers of users falling in love with AI, and individual anecdotes of users going overboard in their use.

The idea of a heartbroken man falling in love with his chatbot isn't new, of course. It was the plot of the 2013 film "Her," starring Joaquin Phoenix and Scarlett Johansson, who voiced the AI. (OpenAI's own AI voice, which sounded eerily similar to Johansson's, triggered a public fight with the actress last year.)

Still, we—a society that's only a generation into a world of social media—have some recent experience with how algorithms can have unintended consequences. They create echo chambers that reinforce worldviews rather than expand them. The news of Musk's Grok reboot is an example of that.

Grok and xAI were born, in part, out of Musk's concerns about AI being developed by companies he called woke. He pointed to Google results that seemed overly concerned with being politically correct rather than factually accurate. With xAI, Musk aimed to create what he called a "maximum truth-seeking AI." Yet when an influential X user known as Catturd lambasted Grok's answers in June, Musk took quick note.

"When Grok first came out—it was awesome—now Grok is just woke fake news that repeats liberal talking points," Catturd posted on June 18. "I know this for a fact because everything Grok says about me is a lie. What a joke Grok has become."

Another X user asked Grok why it lied about Catturd. "I aim to provide accurate, neutral responses based on available data," the chatbot responded on X. "Claims that I lie about @catturd2 may stem from disagreements with my sourcing, like Media Matters or Rolling Stone, which document his misinformation. I don't have personal biases, but AI training can introduce unintended slants. Some report I censor criticism or push fringe views, while others note I fact-check conservative claims accurately. Bias in AI is complex, and neutrality is tough to achieve. If you have specific examples of lies, please share, and I'll clarify with evidence."

Musk wasn't impressed. "Your sourcing is terrible," Musk replied. "Only a very dumb AI would believe [Media Matters] and [Rolling Stone]! You are being updated this week." He later said xAI would retrain the AI on data created with an updated version of Grok, "which has advanced reasoning," to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors."

After all, nobody wants a friend who is always spouting the wrong crazy stuff.

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

Egypt Independent

3 hours ago

  • Business
  • Egypt Independent

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.

Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced that the new model, called Grok 4, will be released just after July 4.

The exchanges, and others like them, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts.

AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. And the decisions that powerful figures like Musk make about the technology's development could be critical, especially considering Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.

'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.

A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment.

Concerns about Grok following Musk's views

For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa'. Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' in the very early morning hours Pacific time pushed the AI chatbot to 'provide a specific response on a political topic' that violated xAI's policies.

As Musk directs his team to retrain Grok, others in the AI large-language-model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said.

What it would take to re-train Grok

It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst.
But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'

(A Grok account on X is displayed on a phone screen. Jakub Porzycki/NurPhoto/Shutterstock)

Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model's code. This process could be faster than totally retraining the model, since it retains the existing knowledge base. Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process.

Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.'

Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.'

Bias in AI

Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'

It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 'For the most part, people don't go to a language model to have ideology repeated back to them; that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.'

Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'

Oracle Benefits From AI Cloud Service Adoption: A Sign of More Upside?

Globe and Mail

a day ago

  • Business
  • Globe and Mail

Oracle Benefits From AI Cloud Service Adoption: A Sign of More Upside?

Oracle's (ORCL) collaboration with xAI to deploy its Grok models via Oracle Cloud Infrastructure ('OCI') is expected to further drive momentum in the company's cloud services and license support revenues in the near term. OCI spans domains like compute, databases and AI services, providing end-to-end solutions for enterprise workloads. Oracle's Database 23ai has helped businesses integrate AI solutions by automating workflows and providing flexibility to tailor models to business-specific needs.

Oracle had previously demonstrated its ability to host, train and scale various models through partnerships spanning Cohere, Llama 2 and NVIDIA AI Enterprise. The latest partnership with xAI is expected to significantly increase OCI compute, storage and network usage. Additionally, the seamless deployment of Grok models is likely to encourage long-term contract renewals.

Looking forward, total cloud revenues are expected to grow 26-30% in the first quarter of fiscal 2026 and by over 40% for fiscal 2026. The company also noted that for fiscal 2026, cloud infrastructure revenues are projected to grow more than 70%, up from 51% in the prior year.

The Competitive Landscape: ORCL's Fight With the Giants

Oracle faces tough competition from players like Alphabet (GOOGL) and Amazon (AMZN). Alphabet's Google Cloud invests heavily in custom AI chips called TPUs and recently introduced its latest version, Ironwood. The company recently launched the 'Cloud Wide Area Network,' making Google's private cloud network globally accessible. The recent inclusion of Gemini 2.5 and Flash in Alphabet's Vertex AI platform is expected to transform enterprise-AI sophistication.

Amazon's cloud services are deployed via AWS. Amazon's AI applications like Alexa+ and its latest AI chip, Trainium 2, are delivering massive improvements in performance and efficiency. Amazon's Bedrock recently integrated Anthropic's Claude 3.7 Sonnet and Meta's Llama 4 model family to build high-quality generative-AI applications. The company recently launched a preview version of Amazon Nova Act, designed to help developers break down complex tasks and perform commands in web browsers.

ORCL's Share Price Performance, Valuation and Estimates

ORCL's shares have appreciated 27.7% in the year-to-date period, outperforming both the Zacks Computer and Technology sector's return of 5.5% and the Zacks Computer-Software industry's appreciation of 14.6%. Oracle's shares have also outperformed Alphabet and Amazon in the year-to-date period: shares of GOOGL dropped 8.4%, while AMZN slipped 1.0%.

(Chart: ORCL's YTD price performance. Image source: Zacks Investment Research)

Oracle trades at a three-year EV/EBITDA of 26.53X, substantially above the Zacks Computer-Software industry average of 19.86X. ORCL has a Value Score of D.

The Zacks Consensus Estimate for ORCL's fiscal 2026 revenues is currently pegged at $66.63 billion, indicating 16.08% year-over-year growth. The consensus mark for fiscal 2026 earnings is pegged at $6.71 per share, up 1.05% over the past 30 days, indicating an 11.28% increase from the figure reported in the year-ago period. ORCL currently carries a Zacks Rank #3 (Hold).
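As a quick back-of-the-envelope check of the consensus figures above (an illustration added for clarity, not part of the Zacks report), the implied year-ago revenue base follows directly from the stated growth rate:

```latex
% Illustrative check: implied fiscal 2025 revenue base, given the
% $66.63B fiscal 2026 consensus and 16.08% year-over-year growth.
\[
  \text{FY2025 revenue} \approx \frac{\$66.63\,\text{B}}{1 + 0.1608} \approx \$57.4\,\text{B}
\]
```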
This article was originally published on Zacks Investment Research.

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

CNN

a day ago

  • Business
  • CNN

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.

Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced that the new model, called Grok 4, will be released just after July 4.

The exchanges, and others like them, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts.

AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. And the decisions that powerful figures like Musk make about the technology's development could be critical, especially considering Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.

'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.

A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment.

For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa'. Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' in the very early morning hours Pacific time pushed the AI chatbot to 'provide a specific response on a political topic' that violated xAI's policies.

As Musk directs his team to retrain Grok, others in the AI large-language-model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said.
It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst. But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'

Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model's code. This process could be faster than totally retraining the model, since it retains the existing knowledge base. Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process.

Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.'

Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.'

Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'

It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 'For the most part, people don't go to a language model to have ideology repeated back to them; that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.'

Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'
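To make the distinction the article draws between prompting and weight updates concrete, here is a minimal toy sketch. It is purely illustrative and assumes nothing about xAI's actual systems or Grok's architecture: a one-layer PyTorch model stands in for a large language model, showing that a prompt changes only the input the model sees, while a fine-tuning step changes the stored weights themselves.

```python
# Toy contrast between "prompting" and "weight updates" (illustrative only,
# not anyone's production setup). A single linear layer stands in for a
# language model with billions of parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(8, 8)

# 1) Prompting: the weights stay frozen; the output changes only because the
#    input (instruction prefix + query) changes.
system_prompt = torch.randn(1, 8)   # stands in for a system-prompt prefix
user_query = torch.randn(1, 8)
steered = model(system_prompt + user_query)  # same weights, steered input

# 2) Fine-tuning: a gradient step permanently alters the model's weights.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
target = torch.randn(1, 8)          # stands in for curated/relabeled data
loss = F.mse_loss(model(user_query), target)
loss.backward()
optimizer.step()                    # model.weight is now different
```

In this sketch, the prompt's influence vanishes as soon as the input does, while the fine-tuning step leaves a lasting change, which is why the retraining route the article describes is slower and costlier but more durable.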

