
Latest news with #AIethics

Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image

CNN

2 days ago


Last week, Grok, the chatbot from Elon Musk's xAI, replied to a user on X who asked a question about political violence. It said more political violence has come from the right than the left since 2016. Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced that the new model, called Grok 4, will be released just after July 4th. The exchange, and others like it, raise concerns among experts that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and raising important questions about bias. AI is expected to shape the way people work, communicate and find information, and it is already affecting areas such as software development, healthcare and education. The decisions that powerful figures like Musk make about the technology's development could therefore be critical – especially considering Grok is integrated into one of the world's most popular social networks, and one where the old guardrails against the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.
'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team. A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment. For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot brought up claims of a white genocide in South Africa in response to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa.' Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' made in the early morning hours Pacific time had pushed the chatbot to 'provide a specific response on a political topic' that violated xAI's policies. As Musk directs his team to retrain Grok, others in the large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said. It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst.
But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.' Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model. This process can be faster than totally retraining the model since it retains the existing knowledge base. Prompting entails instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process. Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.' Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.' Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.' It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful.
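The two levers described above – prompt-level instructions applied at inference time versus changes baked into the model's weights – can be illustrated with a deliberately toy sketch. Everything here is hypothetical: the scoring function, names, and numbers are invented for illustration and have nothing to do with xAI's actual systems.

```python
# Toy illustration of two ways to steer a model's output.
# A "system prompt" adjusts behavior at inference time and leaves the stored
# weights untouched; "fine-tuning" changes the weights themselves, so the
# shift persists across every future query with no prompt needed.

def generate(weights, system_prompt, query):
    """Toy 'model': returns whichever answer scores highest."""
    scores = dict(weights)
    # Prompt-level steering: bias the scores for this one call only.
    if "prefer_b" in system_prompt:
        scores["answer_b"] += 1.0
    return max(scores, key=scores.get)

def fine_tune(weights, preferred):
    """Weight-level steering: return new weights with the preference baked in."""
    updated = dict(weights)
    updated[preferred] += 1.0  # persistent change to the model itself
    return updated

base = {"answer_a": 0.6, "answer_b": 0.4}

print(generate(base, "", "q"))          # answer_a: base weights decide
print(generate(base, "prefer_b", "q"))  # answer_b: prompt steers, weights unchanged
tuned = fine_tune(base, "answer_b")
print(generate(tuned, "", "q"))         # answer_b: persists without any prompt
```

The sketch also shows why, as the article notes, prompt changes are cheap and reversible while weight changes are durable: the first never mutates `base`, the second produces a new model that behaves differently on every query.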
'For the most part, people don't go to a language model to have ideology repeated back to them, that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.' Ultimately, Neely said, he believes authoritative sources will end up rising back to the top as people seek out places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'


UN agency pushes AI ethics standards at Bangkok event as US-China tech rivalry deepens

South China Morning Post

2 days ago


A United Nations agency is rallying policymakers, non-government organisations and academics to support its ethics guidelines on artificial intelligence (AI) at a time when the technology is rapidly changing the world. Unesco, the 194-member UN heritage agency that produced the world's first – and so far only – global AI ethics standards four years ago, hosted a forum in Bangkok this week to drive the adoption of its recommendations. However, there is a long way to go before the recommendations can be turned into a universal, actionable framework amid an intensifying AI race between the US and China, according to analysts. At the opening on Wednesday of the third Unesco Global Forum on the Ethics of AI, Unesco director-general Audrey Azoulay called for collaboration among governments, businesses and civil society to come up with an international solution. 'That is what Unesco is working to provide – preparing the world for AI and preparing AI for the world, ensuring it serves the common good,' she said. The message comes as hopes are dimming for a global consensus on AI ethics. A bipartisan group of US lawmakers introduced a bill in both chambers of Congress to ban the federal use of China-linked AI tools such as DeepSeek, the latest sign of hostility in the tech rivalry between the world's two largest economies.

[Photo: A DeepSeek display during the Global Developer Conference on February 22, 2025 in Shanghai. VCG via Getty Images]

Meanwhile, the world's largest AI companies, from US-based OpenAI and Google to China's DeepSeek, were absent from the forum, which attracted more than 1,000 participants and 35 government ministers, mainly from Asia-Pacific, Africa and Latin America.
When asked how other countries would respond to the divisions in the AI world, Wisit Wisitsora-At, Permanent Secretary at the Thai Ministry of Digital Economy and Society, said Thailand would not take sides in the US-China competition, adding that it would try to develop its own AI ecosystem.

40% of Gen Z men are using AI to cheat at work

Fast Company

5 days ago


Gen Z, the youngest generation of workers, is embracing artificial intelligence at the office. Still, according to a new survey, while AI may help with productivity and automating tasks, the technology also has big impacts on mental health, and it allows for some sneaky work behavior – with men being the worst offenders. Resume Genius recently surveyed 1,000 full-time Gen Z employees on how they're using AI at work and how they feel about its place at the office. Overall, 60% said AI helps them work faster and with less effort, and 56% said the technology improved the accuracy and quality of their work. Meanwhile, 42% believed AI had helped them get new opportunities on the job.

The dark side of AI

Using AI isn't all sunshine and roses – or, rather, whirlwind productivity. There are personal drawbacks: 37% of respondents said they feel replaceable, and 18% said they could no longer perform their tasks without AI and would have to quit their jobs if it were banned. AI is also taking a hefty toll on mental health: 23% of Gen Zers believe the technology is negatively impacting their mental health, while 39% said that the constant updates that come with AI are 'burning them out.' Nearly half of Gen Z workers (49%) believed that the technology could lead to unfair biases. Alarmingly, Gen Z employees are also using AI in ways that are unethical. Almost a third (31%) have used AI in ways that they know break company policies, including sharing internal data. A staggering 39% said they use AI to automate tasks without their manager's permission, and 14% said they do so often or always. And nearly one-third of workers (30%) are straight-up using AI to generate fake work in an effort to look more productive.

The gender gap widens

There's a large gender gap in AI use: 71% of Gen Z men surveyed said they used AI to prioritize tasks and organize their schedule, compared with 48% of Gen Z women.
Meanwhile, 69% of Gen Z men said they use AI to check their work or to get feedback, versus 48% of Gen Z women. With great use also comes more opportunity for wrongdoing. More men are using AI and more men are also using AI to cheat: 40% of Gen Z men said they have passed AI-generated work off as their own. Only 20% of Gen Z women said the same. Interestingly, men also feel less secure at work. Over 40% of Gen Z men said they worried AI could take their jobs. Only 33% of Gen Z women said the same. And while 23% of Gen Z men said they couldn't do their jobs without AI, only 14% of Gen Z women felt similarly. 'It's clear that AI is becoming an everyday support system for many Gen Z professionals,' said Eva Chan, a career expert at Resume Genius. 'But it's also becoming their go-to solution when they don't know what to say or do, and how to handle tough situations. The concern is when workers start outsourcing not just tasks, but their judgment, confidence, and even their voice. If we're not careful, we could see a generation that struggles to make decisions without AI hand-holding.'

Pope Leo says AI threatens humanity and ‘poses challenges to human dignity, justice and labor'

Yahoo

5 days ago


Pope Leo XIV has issued a stark warning about artificial intelligence, declaring it a threat to humanity that demands urgent global action including stringent regulations on Big Tech. 'Today, the church offers its trove of social teaching to respond to another industrial revolution and to innovations in the field of artificial intelligence that pose challenges to human dignity, justice and labor,' Leo told a roomful of cardinals in the Vatican in one of his first major addresses as pontiff. Leo's comments, which were delivered during his first formal audience with the College of Cardinals in the Synod Hall of the Vatican on May 10, were reported by the Wall Street Journal. The Vatican this week is hosting executives from firms including IBM, Cohere, Anthropic and Palantir for a major summit on AI ethics. Leo is expected to issue a written message but has not yet held private meetings with tech CEOs. Microsoft President Brad Smith is expected to meet Vatican officials later this month, and Google is in discussions for a future audience with the pope. By 2040, artificial intelligence is projected to automate or significantly transform 50% to 60% of jobs globally, with some estimates suggesting up to 80% could be impacted by 2050. McKinsey forecasts that 30% of US jobs could be automated by 2030, while Goldman Sachs estimates up to 300 million jobs worldwide — about 25% of the global labor force — may be affected. Labor-intensive roles like construction, maintenance, and skilled trades are expected to remain the most resilient. Just days into his papacy, the first American pope made clear that grappling with AI will be central to his agenda. In naming himself after Pope Leo XIII — the 19th-century 'Pope of the Workers' — Leo XIV signaled a direct link between the upheavals of the industrial era and today's digital revolution. The 267th pope is positioning himself as a moral counterweight to tech companies that have spent years courting the Vatican. 
The Church under both Francis and now Leo has advocated for legally binding global regulations to rein in unchecked AI development. 'Leo XIV wants the worlds of science and politics to immediately tackle this problem without allowing scientific progress to advance with arrogance, harming those who have to submit to its power,' Cardinal Giuseppe Versaldi told the Journal. The push for AI oversight continues the work of Pope Francis, who became increasingly vocal in his later years about the dangers of emerging technologies. Francis, who once joked he barely knew how to use a computer, gradually evolved into a leading voice on the topic — warning of a 'technological dictatorship' and calling AI 'fascinating and terrifying.' In 2020, the Vatican published the 'Rome Call for AI Ethics,' backed by Microsoft and IBM, among others. It urged developers to design AI systems that respect privacy, human rights and non-discrimination. But some tech giants, including Google and OpenAI, have so far declined to endorse it. Francis' involvement grew after the infamous AI-generated image of him in a white puffer jacket went viral in 2023, demonstrating the potential for AI to distort reality. He later cautioned world leaders that 'choices by machines' must not replace human decision-making. Now, Pope Leo — who holds a mathematics degree and a deeper familiarity with tech than his predecessor — is expected to take the Church's advocacy a step further. Vatican officials and clergy see a moral imperative to act as a global conscience in the face of what they view as a potentially dehumanizing force. 'These tools shouldn't be demonized, but they need to be regulated,' said Cardinal Versaldi. 'The question is, who will regulate them? It's not credible for them to be regulated by their makers. There needs to be a superior authority.'
