Latest news with #ChiOnwurah


The Guardian
22-07-2025
- Business
UK government urged to offer more transparency over OpenAI deal
Ministers are facing calls for greater transparency about public data that may be shared with the US tech company OpenAI after the government signed a wide-ranging agreement with the $300bn (£222bn) company, which critics compared to letting a fox into a henhouse.

Chi Onwurah, the chair of the House of Commons select committee on science, innovation and technology, warned that Monday's sweeping memorandum of understanding between OpenAI's chief executive, Sam Altman, and the technology secretary, Peter Kyle, was 'very thin on detail', and called for guarantees that public data will remain in the UK and clarity about how much of it OpenAI will have access to.

The deal paves the way for the Silicon Valley firm behind ChatGPT to explore deploying advanced AI technology in areas including justice, defence and security, and education. It includes OpenAI and the government 'partnering to develop safeguards that protect the public and uphold democratic values'. Kyle said he wants Britain to be 'front and centre when it comes to developing and deploying AI' and that 'this can't be achieved without companies like OpenAI'.

But the deal has also led to concerns. Onwurah said: 'We want assurance that there will be transparency over what public data OpenAI will have access to for training and that it will remain in the UK and within the UK's data protection framework and legislation. It's important for public trust that the government is more transparent about how this relationship will work. The public is certainly not convinced that the tech giants are on their side or that AI is on their side. They need to have confidence that the government is on their side.'

She cited 'major failures' in public sector IT procurement, including the Post Office Horizon scandal, and said: 'We hope and expect that the government has learned the lessons of previous failed technology procurement in its relationship with OpenAI and other AI companies it is bringing into the public sector.'

The Department for Science, Innovation and Technology has been approached for comment.

The deal with OpenAI comes after an agreement this month with Google to provide free technology to the public sector, from the NHS to local councils, and to upskill tens of thousands of civil servants in technology, including AI. Other Silicon Valley companies already working in the UK public sector include Anduril, a US military technology company that provides AI-enabled 'kill web' systems and has been working with the British military. Google, Amazon, Microsoft and Palantir were among technology companies that attended a meeting last month with the justice secretary, Shabana Mahmood, at which ideas were suggested to insert tracking devices under offenders' skin and assign robots to contain prisoners.

The latest agreement includes OpenAI's possible participation in the government's plan for 'AI Growth Zones', which could see huge datacentres built around the UK. Altman said the agreement would enable the UK government to realise the potential of its AI policy by 'turning ambition to action and delivering prosperity for all'.

But Martha Dark, the executive director of Foxglove, a campaign group for fairer technology, called the level of detail 'hopelessly vague'. 'The British government has a treasure trove of public data that would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT,' she said. 'This is yet more evidence of this government's credulous approach to Big Tech's increasingly dodgy sales pitch. Peter Kyle seems bizarrely determined to put the Big Tech fox in charge of the henhouse when it comes to UK sovereignty.'

Sameer Vuyyuru, the chief AI and product officer at Capita, another provider of AI services to the public sector, said there was now 'a complete acknowledgment that AI plays a role in the future of public services'. But he said there was a gap between the public sector's desire for efficiency savings and its understanding of how best to procure AI services.

'The public sector is viewed as one of the most fertile areas for the implementation of AI,' he said, adding that this fertility meant radically increased public sector efficiency as well as revenue growth for providers. He said AI agents would typically operate on, rather than take ownership of, public data.

While AI use is now 'minuscule', he said, up to 50% of often 'mind-numbing and menial' public service tasks could benefit from AI. That could mean cutting waiting times for renewing a driving licence, applying to join the army or applying for tuition subsidies, by increasing the number of cases a civil servant could process each day from 10 to 30 or even 50 with the assistance of an AI agent.


Telegraph
15-07-2025
- Business
Scammers target struggling graduates with fake job ads
Scammers are targeting struggling university graduates and school leavers with fake job adverts on social media amid a slump in entry-level roles. Fraudsters are turning to Instagram and TikTok to deceive young people by impersonating popular job boards to steal their data and money.

JobsAware, a non-profit organisation that provides free employment advice, said it had received 120 reports of scammers using TikTok and Instagram to deceive graduates in the year to July, a significant surge compared with just 13 a year earlier. Keith Rosser, chairman of JobsAware, said: 'Over the past three years we've seen this explosion in the use of TikTok, WhatsApp and other [social media platforms] to really scale this fraud.'

Compared with traditional jobs boards, social media sites push content in front of users rather than relying on someone seeking out information about a role. There has also been a significant rise in the number of job scams on WhatsApp, JobsAware said, with 412 reports of fake roles recorded on the messaging platform during 2024/5, compared with 161 reports the year prior.

Dame Chi Onwurah, chairman of the parliamentary science, innovation and tech committee, said: 'It's concerning to see reports that fraudsters are using social media to scam young people who are just looking for jobs. My committee has found that the UK's current online safety regime is woefully insufficient to keep users safe online.'

The rise in job scams comes amid a challenging labour market for university graduates and school leavers, as companies cut back on recruiting for entry-level roles following Rachel Reeves's £25bn National Insurance (NI) raid. Graduate job postings in the 12 months to June are down 33pc compared with a year earlier, according to the jobs site Indeed.

The UK's labour market has cooled significantly in recent months as a growing number of businesses freeze hiring in response to the Chancellor's changes to NI paid by employers, which took effect in April. The number of vacancies in the three months to May fell to 781,000, according to the Office for National Statistics, the lowest level since the pandemic.

Mr Rosser said the fall in vacancies and a rise in the number of people seeking a second job or additional income were also fuelling the scams. He added: 'There's almost a greater need or desperation for people to get work and fishing in a smaller pool as it were… I think it definitely leads people to a place where the higher the need and urgency, the more susceptible they are to be scammed.'

Over the past year, JobsAware has reported a rise in scammers using AI to make their fake job adverts and documents look increasingly realistic. Mr Rosser said: 'It could be phoney contracts of employment … even fake ID documents to prove who the company is. What we found is that AI is being utilised by fraudsters to make the deception look more convincing.'

TikTok said it only allows job adverts from companies that are officially registered with the Financial Conduct Authority. Meta, which owns Facebook, Instagram and WhatsApp, declined to comment.

A government spokesman said: 'Under the Online Safety Act, all platforms must now proactively tackle illegal fraudulent material, including false representation and scam ads which accounts post directly and promote on users' feeds. Once further codes are in force, major platforms will also have to clamp down on traditional paid-for adverts.'


The Guardian
11-07-2025
- Politics
Social media incentivised spread of Southport misinformation, MPs say
Social media business models endangered the public by incentivising the spread of dangerous misinformation after the 2024 Southport murders, MPs have concluded, adding that current online safety laws have 'major holes'.

The Commons science and technology select committee called for new multimillion-pound fines for platforms that do not set out how they will tackle the spread of harmful content through their recommendation systems.

The MPs warned that rapid advances in generative artificial intelligence, which allows for the creation of convincing fake videos, could make the next misinformation crisis 'even more dangerous' than last August's violent protests after three children were killed by a man wrongly identified online as an asylum seeker who had arrived by small boat. They also called for AI-generated content to be visibly labelled, and said divisive and deceptive content amplified on social media after the attacks may have been part of a foreign disinformation operation.

'It's clear that the Online Safety Act [OSA] just isn't up to scratch,' said Chi Onwurah, the committee chair, after a seven-month inquiry. 'The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn't cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable.'

Neither misinformation nor disinformation is a harm that firms need to address under the OSA, which received royal assent less than two years ago. State-sponsored disinformation can, however, amount to an offence of foreign interference.

The report examines the role of platforms including X, Facebook and TikTok, and comes after this week's opening of a public inquiry into missed opportunities to prevent the killing of Bebe King, six, Elsie Dot Stancombe, seven, and Alice da Silva Aguiar, nine, on 29 July last year.

Just over two hours after the first call to the emergency services, a post on X claimed the suspect was a 'Muslim immigrant', and within five hours a false name, 'Ali al-Shakati', was circulating on the same platform, the MPs found. Within a day, these two posts had received more than 5m views. In fact, the attacker was Axel Rudakubana, a British citizen born in Cardiff.

Another X post that evening calling for violence towards asylum hostels received more than 300,000 views, and the next day the false name was on X's 'Trending in the UK' list. TikTok suggested to users under its 'others searched for' function the words 'Ali al-Shakati arrested in Southport', and by the end of the day after the attack social media posts with the false name had accrued 27m impressions and violence had broken out outside Southport mosque. On 3 and 4 August a Facebook post called for violence against the Britannia hotel in Leeds, where many occupants were asylum seekers.

The committee called for fines of at least £18m if platforms do not set out how they will tackle significant harms that derive from content promoted by their recommendation systems, even if that content is not illegal. It concluded: 'The act fails to keep UK citizens safe from a core and pervasive online harm.' It called on the government to make social media platforms 'identify and algorithmically deprioritise factchecked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm', but stressed: 'It is vital that these measures do not censor legal free expression.'
The MPs called on ministers to extend regulatory powers to tackle social media advertising systems that allow 'the monetisation of harmful and misleading content', with penalties rising depending on severity and the proceeds used to support victims of online harms. The Department for Science, Innovation and Technology has been approached for comment.

Ofcom said it held platforms to account over illegal content but stressed that the scope of laws requiring platforms to tackle legal but harmful content was a matter for the government and parliament. A spokesperson said: 'Technology and online harms are constantly evolving, so we're always looking for ways to make life online safer. We're proposing stronger protections including asking platforms to do more on recommender systems and to have clear protocols for responding to surges in illegal content during crises.'

TikTok said its community guidelines prohibited inaccurate, misleading or false content that may cause significant harm, and it worked with factcheckers and made any content that could not be verified as accurate ineligible for its 'for you' feed. X and Meta were approached for comment.


Daily Mail
20-06-2025
- Politics
QUENTIN LETTS: Something ominous was in the air, and possibly soon in your veins...
The assisted dying vote was reported at half past two. 'Unlock!' said Speaker Hoyle, and his voice went all strangulated. Had someone slipped Mr Speaker a lethal dose? It was that sort of a day. Jangling. Something ominous in the air. And possibly soon in your veins.

Four hours' talk of death made for an incongruous Friday this flaming June. Outside, the blessings of creation twinkled under a blue sky. Inside the chamber, MPs anguished over death-bed agonies and the prospect, some feared, of disabled or anorexic patients being hastened to their Maker.

The state would now 'exercise power over life and death', said Tom Tugendhat (Con, Tonbridge). Supporters of the Bill heckled him. But he was only reflecting the reality if this Bill is passed by the Lords. The Upper House may disagree. The majority of 23 felt slender. Brexit had a majority of over a million and the Lords did its best to kibosh that.

Chi Onwurah (Lab, Newcastle C) noted that private companies, as well as the state, would now be able 'to kill citizens'. My dears, we're going private for Grandpa. So much quicker, and they'll play Vivaldi's Four Seasons to muffle the sound of his death rattle. Ms Onwurah's was one of three or four speeches that appeared to start with one position and conclude with the opposite.

The debate drifted like seaweed. A strong speech for choice from Kit Malthouse (Con, NW Hants) would be balanced by an affecting plea from Jen Craft (Lab, Thurrock) to think of pressure being placed on disabled people. Ms Craft has a daughter with Down's syndrome.

Kim Leadbeater (Lab, Spen Valley) was her usual chirpy self as she moved her private Bill. She bounced about, grinned exhaustingly and said 'this is a robust process!' and 'take back control of your dying days!' Death by exclamation mark. There was a dissonance between her bleak obsession and this Butlin's redcoat persona. Ken Dodd playing an undertaker.

One eloquent supporter of her Bill was Peter Prinsley (Lab, Bury St Edmunds), a doctor with 45 years' experience. He and John McDonnell (Ind, Hayes & Harlington) lent welcome age to that side of the argument. Others throbbed with the certitude of youth and, one fears, the naivety of new MPs yet to learn how officialdom mangles noble legislative intent.

A former NHS manager, Lewis Atkinson (Lab, Sunderland C), insisted hospitals would cope. They always say that. More persuasive support for the Bill came from an intensive-care nurse, Sittingbourne's Kevin McKenna. He had trust in doctors. Do you? After so many NHS scandals?

'I wouldn't put my life, or the life of someone dear to me, in the hands of a panel of officials,' grunted Diane Abbott (Lab, Hackney N). Three times she spoke of 'the vulnerable and marginalised'. But Hanover-born Wera Hobhouse (Lib Dem, Bath) was indignant that constituents had told her that MPs were too stupid to care for the vulnerable. 'Ve haf to educate people!' fulminated Frau Hobhouse.

Sarah Olney (Lib Dem, Richmond Park), shouting like a Sergeant Major, attacked the Bill's workability. Her colleague Luke Taylor (Sutton & Cheam), not the nimblest of orators, gripped a text of his speech tightly with his thick fingers and deplored 'the status crow'. It was a matter of 'how one might exit this earthly realm', he averred, more Mr Pooter than John Betjeman.

James Cleverly, in the Man From Del Monte's suit, kept touching his heart as he feared money would be diverted from elsewhere in the NHS. We kept hearing the term 'a fundamental change'. When relations were bumped off, would suspicion be seeded?

Mark Garnier (Con, Wyre Forest) was pro the Bill but admitted: 'I'm not the world's greatest legislator.' Oh.

The most troubling speech came from a vet, Neil Hudson (Con, Epping Forest). Having killed many animals, he reported that 'the final act doesn't always go smoothly or according to plan'. He 'shuddered to think' what would happen when an assisted death turned messy.