4 principles for using AI to spot abuse—without making it worse
Fast Company | 18-06-2025
Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people—including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.
Developers are using natural language processing, for example—a form of AI that interprets written or spoken language—to try to detect patterns of threats, manipulation, and control in text messages. Those patterns could help identify domestic abuse and potentially assist courts or law enforcement in intervening early. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most 'at risk' for abuse.
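To make the technique concrete, here is a minimal, purely illustrative sketch of the kind of text-classification pipeline such tools build on, using scikit-learn. The tiny inline dataset and its labels are invented for this example; real systems are trained and validated on far larger, carefully labeled data.

```python
# Illustrative sketch of an NLP classifier of the kind described above.
# The toy messages and labels are invented for this example; production
# systems train on large, carefully labeled datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "You can't see your friends anymore, I won't allow it",
    "Want to grab lunch tomorrow?",
    "If you leave, you'll regret it",
    "Running late, see you at 6",
]
labels = [1, 0, 1, 0]  # 1 = potentially concerning, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Scores are probabilities, not verdicts: thresholds and human review
# determine what, if anything, gets flagged.
print(model.predict_proba(["Don't you dare talk to him again"])[:, 1])
```

Even a toy version makes the core risk visible: whatever patterns the human-chosen labels encode, the model reproduces at scale.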
When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.
But as a social worker with 15 years of experience researching family violence—and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator—I've seen how well-intentioned systems often fail the very people they are meant to protect.
Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements—not faces or voices—to detect physical violence. I'm grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?
New tech, old injustice
Many AI tools are trained to 'learn' by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI.
That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism. A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families' risk levels—scores given to hotline staff to help them screen calls—would have flagged Black children for investigation 20% more often than white children if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.
Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as 'aggressive' at a significantly higher rate than Standard American English—up to 62% more often in certain contexts.
Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.
These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems—sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors.
Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.
Surveillance over support
Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.
In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.
In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months—overwhelming staff and missing at least one real incident. The program's accuracy did 'not achieve a level that would be considered acceptable to staff and management,' according to the independent report.
Children are affected, too. In U.S. schools, AI surveillance platforms like Gaggle, GoGuardian, and Securly are marketed as ways to keep students safe. Such programs can be installed on students' devices to monitor online activity and flag anything concerning.
But they've also been shown to flag harmless behaviors—like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.
Other systems use classroom cameras and microphones to detect 'aggression.' But they frequently misidentify normal behavior like laughing, coughing, or roughhousing—sometimes prompting intervention or discipline.
These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans—data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.
Care, not punishment
I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I've developed a framework of four key principles for what I call 'trauma-responsive AI.'
Survivor control: People should have a say in how, when, and whether they're monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.
Human oversight: Studies show that combining social workers' expertise with AI support improves fairness and reduces child maltreatment—as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.
Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models; a brief sketch of such an audit follows this list.
Privacy by design: Technology should be built to protect people's dignity. Open-source tools like Amnesia, Google's differential privacy library, and Microsoft's SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people's identities in video or photo data, as in the second sketch below.
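As a concrete, hedged illustration of the bias auditing described above, the sketch below uses Fairlearn, one of the open-source tools named, to compare a model's flag rates across demographic groups. The outcomes, predictions, and group labels are invented placeholders, not real child welfare data.

```python
# Illustrative bias audit with Fairlearn. The labels, predictions, and
# group memberships here are invented placeholders for the example.
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)

y_true = [1, 0, 0, 1, 0, 1, 0, 0]                  # actual outcomes
y_pred = [1, 1, 0, 1, 0, 1, 1, 0]                  # model's flags
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]  # demographic group

# Flag rate per group: large gaps suggest disparate impact.
audit = MetricFrame(metrics=selection_rate, y_true=y_true,
                    y_pred=y_pred, sensitive_features=group)
print(audit.by_group)

# Single summary number: the difference in flag rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

A large gap in flag rates between groups is the kind of signal that would prompt a deeper review before a system is deployed.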
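And as a minimal illustration of privacy by design, the following sketch blurs detected faces in an image with OpenCV. It uses OpenCV's bundled Haar-cascade detector rather than any particular vendor's model, and the file names are placeholders.

```python
# Minimal face-blurring sketch using OpenCV's bundled Haar-cascade
# detector. File names are placeholders; production systems typically
# use stronger detectors and apply this frame by frame to video.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.jpg")  # placeholder input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5):
    # Replace each detected face region with a heavy Gaussian blur.
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("frame_blurred.jpg", img)
```

Applied frame by frame, the same pattern anonymizes video while leaving body movements, like the limb motions iCare analyzes, intact.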
Honoring these principles means building systems that respond with care, not punishment.
Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate for including survivors in all stages of tech development—from needs assessments to user testing and ethical oversight.
Legislation is important, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local governments from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.