Latest news with #facialrecognition


The Verge
a day ago
- Business
xAI reportedly asked staff for 'perpetual' access to their face recordings.
In April, xAI asked workers to record their facial expressions as part of efforts to train Grok to understand human emotions, according to a report from Business Insider. During the process, xAI reportedly had employees sign a form that gives the company access to their 'likeness' for training and 'inclusion in and promotion of commercial products and services.'


Daily Mail
a day ago
- Politics
AI to catch Channel migrants pretending to be children: Labour launches trial of face recognition technology
The Government is set to trial AI-powered facial recognition technology to determine whether Channel migrants are being wrongly identified as children. The Home Office today announced testing of the new technology will begin later this year, with the hope it could be fully integrated into the asylum system in 2026.

Ministers admitted that assessing the age of asylum seekers is 'an incredibly complex and difficult task' but said AI might soon provide quick and cost-effective results. There have long been fears that some of those who cross the Channel in small boats - who often don't carry official documents such as passports - are wrongly claiming to be children. Unaccompanied minors are more likely to be granted asylum than adults, and some are suspected of faking their ages in a bid to boost their chances of staying in the UK.

In the first half of 2024, a total of 1,317 migrants claiming to be minors at the border were later judged to be adults, out of 2,122 age disputes raised over the same period. Currently, initial age decisions are made by Home Office staff based on a migrant's physical appearance and demeanour.

The previous Tory government proposed using scientific methods - such as X-rays, CT scans or MRI imaging of key parts of the body - to assess the age of asylum seekers. Powers to conduct such assessments were passed by MPs as part of the 2022 Nationality and Borders Act, but were never put into practice.

Home Office minister Angela Eagle today revealed the Labour Government has now concluded that AI technology is the most 'cost-effective option'. In a written statement to Parliament, Ms Eagle, the border security and asylum minister, said: 'Accurately assessing the age of individuals is an incredibly complex and difficult task.

'The Home Office has spent a number of years analysing which scientific and technological methods would best assist the current process, including looking at the role that Artificial Intelligence (AI) technology can play.

'Since coming into office, this Government has commissioned further tests and analysis to determine the most promising methods to pursue further.

'Based on this work, we have concluded that the most cost-effective option to pursue is likely to be Facial Age Estimation, whereby AI technology – trained on millions of images where an individual's age is verifiable – is able to produce an age estimate with a known degree of accuracy for an individual whose age is unknown or disputed.

'In a situation where those involved in the age assessment process are unsure whether an individual is aged over or under 18, or do not accept the age an individual is claiming to be, Facial Age Estimation offers a potentially rapid and simple means to test their judgements against the estimates produced by the technology.'

Ms Eagle noted that online retailers, social media websites and other companies are increasingly adopting AI-powered facial recognition technology as part of online age verification checks. She added: 'Early assessments suggest that Facial Age Estimation could produce workable results much quicker than other potential methods of scientific or technological age assessment, such as bone X-rays or MRI scans, but at a fraction of the cost, and with no requirement for a physical medical procedure or accompanying medical supervision.'

Labour previously watered down laws, introduced by the Tories, that gave ministers the power to treat asylum seekers who refused to undergo scientific age checks as adults.
The announcement on Tuesday came as a report by the borders watchdog into Home Office age assessments said it is 'inevitable' that some decisions will be wrong without a 'foolproof test' of chronological age. The watchdog added that this is 'clearly a cause for concern, especially where a child is denied the rights and protections to which they are entitled'.
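For readers curious what 'Facial Age Estimation' looks like in practice, the sketch below gives a minimal illustration of the general approach the minister describes: a neural network trained on images with verifiable ages that outputs an age estimate for a new face. It is written in Python with PyTorch; the architecture, preprocessing, and names are assumptions for illustration, not details of the Home Office's actual system, which have not been published.

```python
# Minimal illustrative sketch of facial age estimation (not the Home Office
# system; its details are unpublished). Assumes torch, torchvision, Pillow.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

class AgeEstimator(nn.Module):
    """A CNN backbone with a single regression head that outputs age in years.
    In a real system the weights would be trained on millions of images with
    verified ages, as the minister's statement describes."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).squeeze(-1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def estimate_age(model: AgeEstimator, image_path: str) -> float:
    """Return a point estimate of age for one face image. A deployed system
    would also report the model's known error margin and compare the estimate
    against the 18-year threshold, rather than trusting the point estimate."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        return model(img).item()
```

The 'known degree of accuracy' in the minister's statement corresponds to the error margin such a model exhibits on held-out test data, which is why a deployment would treat the output as an estimate with a band around it rather than a definitive age.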


Russia Today
2 days ago
- Politics
India to use AI and drones to combat crimes against women
New Delhi proposes to use artificial intelligence-based facial recognition systems, smart lighting, and drones to monitor high-risk areas to curb crime against women in the country, the Home Ministry told the Supreme Court on Monday.

The ministry was responding to a public interest litigation filed by the Supreme Court Women Lawyers' Association over the rise in crimes against women in the country, a Hindustan Times report said. The ministry said seven railway stations, in cities including Delhi and Mumbai, will soon be equipped with AI systems, a Times of India report added. Other measures, such as automatic license plate recognition, smart lighting systems, and drones to monitor high-risk areas, will also be put in place, the ministry added.

The ministry told the top court that the National Data Sharing and Exchange Platform contains sensitive information, including names, addresses, photographs, and fingerprint details of individuals involved in sexual offenses such as rape, stalking, and child abuse, the Hindustan Times report added. The database currently has 2.02 million entries that can be accessed by all police stations and law enforcement agencies across the country through the Inter-Operable Criminal Justice System, an initiative aimed at integrating the police, courts, prisons, forensic labs, and prosecution with the help of technology, the ministry said.

The Supreme Court Women Lawyers' Association argued that further measures are necessary to combat crimes against women, as the existing steps outlined by the ministry are not making a "big difference," according to the Hindustan Times report.

Data from India's National Crime Records Bureau showed an increase in crimes against women, from 5.8 million in 2018 to 6.6 million in 2022. Last week, a 20-year-old student who had repeatedly complained of sexual harassment by a senior teacher died from 90% burns after attempting self-immolation outside the principal's office, according to reports.


South China Morning Post
3 days ago
- Politics
The future of surveillance tech is already here – in the US, not China
Out of story ideas about China? One default topic for Western hacks is to warn against the repressive nature of China's pervasive 'hi-tech' public surveillance. But a recent one in The New York Times takes the cake. Forgive the long quote, but it helps to fill up column space. It's also necessary to show the person's pathos or value system. I don't know. But here goes:

'I heard some surprising refrains on my recent travels through China. "Leave your bags here," a Chinese acquaintance or tour guide would suggest when I ducked off the streets into a public bathroom. "Don't worry," they'd say and shrug when I temporarily lost sight of my young son in the crowds.

'The explanation always followed: "Nobody will do anything," they'd say knowingly. Or: "There's no crime." And then, always: "There are so many cameras!" I couldn't imagine such blasé faith in public safety back when I last lived in China, in 2013, but on this visit it was true: Cameras gawked from poles, flashed as we drove through intersections, lingered on faces as we passed through stations or shops.'

The writer, an American, is troubled. 'I felt that I'd gotten a taste of our own American future,' she wrote. 'Wasn't this, after all, the logical endpoint of an evolution already under way in America?'

Oh dear! In fact, high-resolution public security cameras with facial recognition features are so yesterday's tech. The Times article is titled 'Can we see our future in China's cameras?' Well, no, lady: if you want to see your future, go back to your own country.


Yahoo
5 days ago
How AI-powered police forces watch your every move
Change in the criminal justice system is rarely linear. It comes in fits and starts, slowed by bureaucracy, politics, and just plain inertia. Reforms routinely get passed, then rolled back, watered down, or tied up in court. However, there is one corner of the system where change is occurring rapidly and almost entirely in one direction: the adoption of artificial intelligence. From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up, The Marshall Project reports.

Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists, according to recent reporting by The Washington Post. Since 2023, the technology has been used in dozens of arrests, and it was deployed in two high-profile incidents this year that thrust the city into the national spotlight: the New Year's Eve terror attack that killed 14 people and injured nearly 60, and the escape of 10 people from the city jail last month.

In 2022, City Council members attempted to put guardrails on the use of facial recognition, passing an ordinance that limited police use of the technology to specific violent crimes and mandated oversight by trained examiners at a state facility. But those guidelines assume it's the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition, installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group's app then received notifications when someone on a wanted list was detected on the camera network, along with a location.

That has civil liberties groups and defense attorneys in Louisiana frustrated. 'When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don't have the tools to do what we do, which is hold people accountable,' Danny Engelberg, New Orleans' chief public defender, told the Post. Supporters of the effort, meanwhile, say it has contributed to a pronounced drop in crime in the city. The police department said it would suspend use of the technology shortly before the Post's investigation was published.

New Orleans isn't the only place where law enforcement has found a way around city-imposed limits on facial recognition. Police in San Francisco and Austin, Texas, have both circumvented restrictions by asking nearby or partnering law enforcement agencies to run facial recognition searches on their behalf, according to reporting by the Post last year. Meanwhile, at least one city is considering a new way to gain the use of facial recognition technology: sharing millions of jail booking photos with private software companies in exchange for free access. Last week, the Milwaukee Journal-Sentinel reported that the Milwaukee police department was considering such a swap, leveraging 2.5 million photos in return for $24,000 in search licenses. City officials say they would use the technology only in ongoing investigations, not to establish probable cause.
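How might a camera network like the one described above match faces against a wanted list? The sketch below shows the standard embedding-and-threshold pattern used by most face recognition pipelines. It is a minimal illustration under assumed details, not Project NOLA's or any vendor's actual implementation; the IDs, the 0.6 threshold, and the stand-in random embeddings are invented for the example.

```python
# Hedged sketch of a watchlist alert check (Project NOLA's actual pipeline is
# not public; the embedding source, IDs, and 0.6 threshold are assumptions).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_watchlist(face_embedding: np.ndarray,
                    watchlist: dict[str, np.ndarray],
                    threshold: float = 0.6) -> list[str]:
    """Return IDs of watchlist entries whose stored embedding is similar
    enough to the detected face. A real system would attach the camera's
    location and a timestamp and push the alert to subscribed officers."""
    return [person_id for person_id, ref in watchlist.items()
            if cosine_similarity(face_embedding, ref) >= threshold]

# Toy usage: random vectors stand in for embeddings from a face model.
rng = np.random.default_rng(0)
watchlist = {"case-1042": rng.normal(size=512), "case-2077": rng.normal(size=512)}
detected = rng.normal(size=512)
print(check_watchlist(detected, watchlist))  # likely [] for random vectors
```

The threshold is the policy lever in such systems: set it low and false alerts multiply; set it high and matches are missed, which is part of why the city ordinance mandated review by trained examiners.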
Another way departments can skirt facial recognition rules is to use AI analysis that doesn't technically rely on faces. Last month, MIT Technology Review noted the rise of a tool called 'Track,' offered by the company Veritone. It can identify people using 'body size, gender, hair color and style, clothing, and accessories.' Notably, the algorithm can't be used to track by skin color. Because the system is not based on biometric data, it evades most laws intended to restrain police use of identifying technology. It would also allow law enforcement to track people whose faces are obscured by a mask or a bad camera angle.

In New York City, police are also exploring ways to use AI to identify people not just by face or appearance, but by behavior. 'If someone is acting out, irrational… it could potentially trigger an alert that would trigger a response from either security and/or the police department,' the Metropolitan Transportation Authority's chief security officer, Michael Kemper, said in April, according to The Verge.

Beyond people's physical locations and movements, police are also using AI to change how they engage with suspects. In April, Wired and 404 Media reported on a new AI platform called Massive Blue, which police are using to engage with suspects on social media and in chat apps. Some applications of the technology include intelligence gathering from protesters and activists, and undercover operations intended to ensnare people seeking sex with minors. Like most things AI is being employed to do, this kind of operation is not novel. Years ago, I covered efforts by the Memphis Police Department to connect with local activists via a department-run Facebook account for a fictional protester named 'Bob Smith.' But like many facets of emerging AI, it's not the intent that's new; it's that the digital tools for these kinds of efforts are more convincing, cheaper, and more scalable.

That sword cuts both ways, though. Police and the legal system more broadly are also contending with increasingly sophisticated AI-generated material in the context of investigations and evidence in trials. Lawyers are growing worried about the potential for deepfake AI-generated videos, which could be used to create fake alibis or falsely incriminate people. In turn, this technology creates the possibility of a 'deepfake defense' that introduces doubt into even the clearest video evidence. Those concerns became even more urgent with the release of Google Gemini's hyper-realistic video engine last month.

There are also questions about less duplicitous uses of AI in the courts. Last month, an Arizona court watched an impact statement from a murder victim, generated with AI by the man's family. The defense attorney for the man convicted in the case has filed an appeal, according to local news reports, questioning whether the emotional weight of the synthetic video influenced the judge's sentencing decision.

This story was produced by The Marshall Project, a nonpartisan, nonprofit news organization that seeks to create and sustain a sense of national urgency about the U.S. criminal justice system, and reviewed and distributed by Stacker.