
Latest news with #AIbias

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

CNN

4 days ago


Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016.

Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced the new model, called Grok 4, will be released just after July 4th.

The exchange, and others like it, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts.

AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. The decisions that powerful figures like Musk make about the technology's development could be critical, especially because Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.

'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.

A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment.

For months, users have questioned whether Musk has been tilting Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa.' Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' made in the early morning hours Pacific time had pushed the AI chatbot to 'provide a specific response on a political topic' that violated xAI's policies.

As Musk directs his team to retrain Grok, others in the AI large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said.
It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst. But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'

Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights inside the model. This process can be faster than full retraining because the model retains its existing knowledge base. Prompting entails instructing a model to respond to certain queries in a specific way, whereas weights influence the model's decision-making process.

Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.'

Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.' Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by the humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'

It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 'For the most part, people don't go to a language model to have ideology repeated back to them; that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.'

Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'
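To make that distinction concrete, here is a minimal, hypothetical sketch of the two approaches the article describes: steering a model through prompting, which leaves its weights untouched, versus nudging the weights by training on curated examples. The toy linear 'model', the instruction_bias parameter and the curated data are all invented for illustration; nothing here reflects xAI's actual systems or training pipeline.

```python
# Hypothetical sketch of "prompting" vs "adjusting weights".
# Real LLMs operate on tokens and billions of parameters; this toy
# linear model only illustrates the difference in principle.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4)          # the model's learned parameters

def model(features: np.ndarray) -> float:
    """A toy 'model': a score produced from input features."""
    return float(features @ weights)

def answer_with_prompt(features: np.ndarray, instruction_bias: float) -> float:
    """Prompting: weights stay fixed; extra instructions are bolted onto
    the input, shifting the response without changing the model."""
    return model(features) + instruction_bias

def finetune_step(examples: np.ndarray, targets: np.ndarray, lr: float = 0.1) -> None:
    """'Retraining': one gradient-descent step on curated examples
    permanently moves the weights (and everything they encode)."""
    global weights
    preds = examples @ weights
    grad = examples.T @ (preds - targets) / len(targets)
    weights -= lr * grad

x = np.array([1.0, 0.5, -0.3, 2.0])
print("base answer:            ", model(x))
print("prompted answer:        ", answer_with_prompt(x, instruction_bias=1.5))

curated = rng.normal(size=(8, 4))     # hand-picked training data
desired = np.full(8, 3.0)             # the answers the curator wants
for _ in range(50):
    finetune_step(curated, desired)
print("answer after retraining:", model(x))   # the weights themselves have moved
```

The contrast is the one Frosst and Neely draw: a prompt only changes how a fixed model is asked, while retraining on hand-picked data changes what the model itself has learned.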

What is AI bias?

Finextra

03-06-2025


What is AI bias?

The term 'AI bias' refers to situations whereby an artificial intelligence (AI) system produces prejudiced results as a consequence of flaws in its machine learning process. Often, AI bias mirrors society's inequalities, be they around race, gender, class, nationality, and so on. In this instalment of Finextra's Explainer series, we ask where bias in AI originates, what the consequences of feeding models skewed data are, and how the risks can be mitigated.

The sources of bias

AI bias can develop in three key areas:

1. The data fed to the model. If the data used to train an AI is not representative of the real world, or contains existing societal biases, the model will ingrain these prejudices and perpetuate them in its decisioning.

2. The algorithms. The very design of an AI algorithm can also introduce bias and misrepresentation. Some algorithms may overstate or understate patterns in data – resulting in skewed predictions.

3. The programmed objectives. The goals that an AI system is programmed to achieve can also be biased. If objectives are not designed with fairness and equity in mind, the engine may discriminate against certain groups.

Loan applications: A case study

So, what relevance does AI bias have to the financial services industry? AI-powered tools are currently being rolled out by financial institutions (FIs) across the globe. Indeed, AI is being deployed to automate operations; detect fraud; manage economic risk; trade algorithmically; design personalised products; support data analytics and reporting; and even improve customer service. This technology is embedding itself in bank operations, and our dependency on it will only become deeper. It is vital for the integrity of our financial systems, therefore, that AI bias is identified, understood, and controlled.

In the world of loan applications, for example, AI-powered approval systems are increasingly being leveraged to streamline banks' back-office processes. In some cases, loans have been denied to individuals from certain socioeconomic backgrounds as a result of bias baked into AI models' data or algorithms. In 2021, an investigation by The Markup found lenders were more likely to deny mortgages to people of colour than to white people with similar financial characteristics: Black applicants were 80% more likely to be rejected, Native Americans 70% more likely, and Latinos 40% more likely. (A simple way to measure such a disparity is sketched below.)

Unchecked AI bias and its consequences

Failing to address AI bias is not just discriminatory towards the end-users of financial services. It can also result in legal liabilities, reputational damage, financial and operational risks, and regulatory non-compliance for the institutions involved. The European Union (EU)'s AI Act compels providers of AI systems to ensure their training, validation, and testing datasets are subject to appropriate examination for biases and correction measures. Failure to meet this requirement could trigger penalties and fines.

Operational and ethical issues aside, allowing bias in AI to fester is simply bad for business. Bias can reduce the overall accuracy and effectiveness of AI tools – hindering their potential to deliver the outcomes they were designed for.
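To show how the kind of disparity reported in the loan case study can be quantified, here is a minimal sketch that computes per-group approval rates and a disparate impact ratio on a tiny synthetic dataset. The data, the column names and the 0.8 'four-fifths rule' threshold are illustrative assumptions, not the methodology or figures of The Markup's investigation.

```python
# Hypothetical sketch: measuring group-level disparity in loan approvals.
# The dataset and the 0.8 threshold (the common "four-fifths rule") are
# illustrative; this is not The Markup's methodology or data.
import pandas as pd

applications = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group
rates = applications.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio: approval rate of the least-favoured group
# divided by that of the most-favoured group.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")

# A ratio well below 0.8 is a conventional red flag that the
# decisioning process warrants closer scrutiny.
if ratio < 0.8:
    print("warning: potential adverse impact; investigate data and model")
```

In practice such checks would be run on real application data, broken out by legally protected characteristics and combined with controls for legitimate financial factors such as income and credit history.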
Mitigating the risks

The fuel of today's industrial revolution, also known as Industry 4.0, is data – and the locomotive it powers is the intelligent system. If the fuel is not refined (and bias-free), it will damage both AI's ability to run efficiently and consumers' trust in the technology itself. Though bias in datasets may never be entirely erased, it is incumbent on AI providers and the institutions that deploy the technology to mitigate the risks. Banks must strive for data diversity to ensure training data is representative. Algorithm design must be reviewed to guarantee processes are fair and equitable, and AI systems should be transparent and explainable, so that any flaws can be ironed out. Supporting banks' efforts with these challenges are bias detection and mitigation tools, which help to flag and remedy cases of AI bias as and when they appear.
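One technique that bias mitigation tooling commonly applies is reweighing: training examples are weighted so that each group/outcome combination contributes as if group and outcome were statistically independent, rather than in the skewed proportions found in the raw data. The sketch below illustrates the idea with invented column names and data; production deployments would typically rely on dedicated fairness libraries and far broader audits rather than this hand-rolled version.

```python
# Hypothetical sketch: "reweighing" training data so each (group, label)
# combination carries the weight it would have if group and label were
# independent. Column names and data are illustrative only.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1,   1,   0,   0,   0,   0,   1,   0],
})

n = len(train)
p_group = train["group"].value_counts(normalize=True)
p_label = train["label"].value_counts(normalize=True)
p_joint = train.groupby(["group", "label"]).size() / n

def weight(row: pd.Series) -> float:
    """Expected joint probability under independence divided by the
    observed joint probability gives the per-row sample weight."""
    expected = p_group[row["group"]] * p_label[row["label"]]
    observed = p_joint[(row["group"], row["label"])]
    return expected / observed

train["sample_weight"] = train.apply(weight, axis=1)
print(train)

# These weights can then be passed to most scikit-learn estimators via
# fit(X, y, sample_weight=train["sample_weight"]).
```

Up-weighting under-represented combinations in this way counteracts the skew in the training data without discarding any records, which is one reason the approach is popular as a pre-processing step.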
