Latest news with #iCare
Yahoo
6 days ago
- Business
- Yahoo
Clover Health (CLOV) Announced a New Pharmacy Pilot Program in New Jersey
On July 9, Clover Health Investments, Corp. (NASDAQ:CLOV) announced a new pharmacy pilot program in New Jersey. The program is being launched in partnership with IPC Digital Health, which connects independent community pharmacies across the state, and is aimed at helping seniors. Management noted that local pharmacies in the iCare+ network will be the backbone of the initiative, as these pharmacies already know their communities and will use new technology and virtual services to assist patients, especially those with chronic health conditions.

A key part of the program is the use of real-time tools powered by Clover Health's data and AI. These tools will monitor whether prescriptions are filled and help spot when patients miss doses. Pharmacists will work closely with doctors and care teams to ensure seniors always get the right medicine at the right time, close to where they live. Clover Health is a technology company that helps improve healthcare for people on Medicare, especially seniors.

While we acknowledge the potential of CLOV as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock. READ NEXT: 30 Stocks That Should Double in 3 Years and 11 Hidden AI Stocks to Buy Right Now. Disclosure: None. This article was originally published at Insider Monkey.
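The refill monitoring described above can be illustrated with a small, hypothetical sketch: compare each prescription fill date against when the previous fill's supply should have run out, and flag long gaps. This is only a toy example of how adherence gaps are commonly detected from pharmacy fill data; it is not Clover Health's or IPC Digital Health's actual system, and the function name, fields, and grace period are assumptions.

```python
from datetime import date
from typing import List, Tuple

# Hypothetical illustration only: one common way adherence tools flag missed
# doses is to compare each refill date against the day the previous fill's
# supply ran out. Field names and the grace period are assumptions.

def refill_gaps(fills: List[Tuple[date, int]], grace_days: int = 7):
    """Return (gap_start, gap_days) for every gap longer than `grace_days`.

    `fills` is a list of (fill_date, days_supply) sorted by fill_date.
    """
    gaps = []
    for (prev_date, prev_supply), (next_date, _) in zip(fills, fills[1:]):
        runout = prev_date.toordinal() + prev_supply   # day the supply ran out
        gap = next_date.toordinal() - runout           # days without medication
        if gap > grace_days:
            gaps.append((date.fromordinal(runout), gap))
    return gaps

# Example: a 30-day supply filled Jan 1 and not refilled until Feb 20.
fills = [(date(2025, 1, 1), 30), (date(2025, 2, 20), 30), (date(2025, 3, 22), 30)]
for start, days in refill_gaps(fills):
    print(f"Possible adherence gap starting {start}: {days} days without coverage")
```

In practice such a flag would only prompt a pharmacist outreach call, not an automated decision.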


Fast Company
18-06-2025
- Fast Company
4 principles for using AI to spot abuse—without making it worse
Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people—including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.

Developers are using natural language processing, for example—a form of AI that interprets written or spoken language—to try to detect patterns of threats, manipulation, and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most 'at risk' for abuse.

When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier. But as a social worker with 15 years of experience researching family violence—and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator—I've seen how well-intentioned systems often fail the very people they are meant to protect.

Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements—not faces or voices—to detect physical violence. I'm grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?

New tech, old injustice

Many AI tools are trained to 'learn' by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI. That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism.

A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families' risk levels—scores given to hotline staff to help them screen calls—would have flagged Black children for investigation 20% more often than white children if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.

Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as 'aggressive' at a significantly higher rate than Standard American English—up to 62% more often, in certain contexts. Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.

These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems—sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors. Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.

Surveillance over support

Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.
In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy. In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months—overwhelming staff and missing at least one real incident. The program's accuracy did 'not achieve a level that would be considered acceptable to staff and management,' according to the independent report.

Children are affected, too. In U.S. schools, AI surveillance tools like Gaggle, GoGuardian, and Securly are marketed as ways to keep students safe. Such programs can be installed on students' devices to monitor online activity and flag anything concerning. But they've also been shown to flag harmless behaviors—like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.

Other systems use classroom cameras and microphones to detect 'aggression.' But they frequently misidentify normal behavior like laughing, coughing, or roughhousing—sometimes prompting intervention or discipline.

These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans—data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.

Care, not punishment

I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I've developed a framework of four key principles for what I call 'trauma-responsive AI.'

- Survivor control: People should have a say in how, when, and if they're monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.
- Human oversight: Studies show that combining social workers' expertise with AI support improves fairness and reduces child maltreatment—as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.
- Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models (a minimal sketch of this kind of audit appears at the end of this article).
- Privacy by design: Technology should be built to protect people's dignity. Open-source tools like Amnesia, Google's differential privacy library, and Microsoft's SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people's identities in video or photo data.

Honoring these principles means building systems that respond with care, not punishment. Some promising models are already emerging.
The Coalition Against Stalkerware and its partners advocate for including survivors in all stages of tech development—from needs assessments to user testing and ethical oversight. Legislation is important, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local governments from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.
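As a rough illustration of the bias auditing described above, the sketch below uses Fairlearn, one of the open-source tools the article names, to compare how often a model flags cases across two demographic groups. The data, column names, and groups are entirely made up for demonstration; this is not the Allegheny County model or any real screening system.

```python
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

# Toy, made-up data standing in for a screening model's outputs.
# `flagged` is the model's decision (1 = flag for investigation);
# `group` is a sensitive attribute. Neither comes from any real system.
df = pd.DataFrame({
    "flagged": [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    "actual":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    "group":   ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# How often each group gets flagged (selection rate), side by side.
frame = MetricFrame(
    metrics=selection_rate,
    y_true=df["actual"],
    y_pred=df["flagged"],
    sensitive_features=df["group"],
)
print(frame.by_group)       # flag rate per group
print(frame.difference())   # largest gap between groups

# The same gap as a single number: 0 means equal flag rates across groups.
print(demographic_parity_difference(
    df["actual"], df["flagged"], sensitive_features=df["group"]
))
```

A gap in selection rates between groups is not proof of unfairness on its own, but it is exactly the kind of signal that should trigger the human review the article argues for.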
Yahoo
13-06-2025
- Yahoo
Protecting the vulnerable, or automating harm? AI's double-edged role in spotting abuse
As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Aislinn Conrad, University of Iowa.

Read more:
- Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns
- Is using AI tools innovation or exploitation? 3 ways to think about the ethics
- Healing from child sexual abuse is often difficult but not impossible

Aislinn Conrad is developing iCare, an AI-powered, real-time violence detection system.
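Conrad describes iCare as analyzing limb movements, not faces or voices, to detect physical violence. The system's internals are not described in these articles, so the sketch below is only a generic, hypothetical illustration of movement-based analysis: given per-frame body keypoints from an off-the-shelf pose estimator, compute wrist speeds between frames and flag sudden, rapid motion. The keypoint indices, frame rate, and threshold are all assumptions, not iCare's method.

```python
import numpy as np

# Generic illustration of movement-based (rather than face- or voice-based)
# analysis; this is NOT the iCare system. We assume a pose estimator has
# already produced (x, y) keypoints per frame for one person, and we only
# look at how fast the wrists move between consecutive frames.

FPS = 30                      # assumed camera frame rate
SPEED_THRESHOLD = 900.0       # pixels/second; arbitrary demo value
WRIST_IDS = [9, 10]           # assumed indices of left/right wrist keypoints

def fast_wrist_frames(keypoints: np.ndarray) -> np.ndarray:
    """Return indices of frames where either wrist exceeds the speed threshold.

    keypoints: array of shape (num_frames, num_keypoints, 2) in pixels.
    """
    wrists = keypoints[:, WRIST_IDS, :]                   # (frames, 2, 2)
    deltas = np.diff(wrists, axis=0)                      # movement per frame
    speeds = np.linalg.norm(deltas, axis=-1) * FPS        # pixels/second
    flagged = np.any(speeds > SPEED_THRESHOLD, axis=-1)   # either wrist too fast
    return np.where(flagged)[0] + 1                       # later frame of each pair

# Synthetic example: 60 frames of slow drift with one abrupt jump at frame 30.
rng = np.random.default_rng(0)
kp = np.cumsum(rng.normal(0, 1, size=(60, 17, 2)), axis=0)  # slow drift
kp[30:, 9] += 80.0                                          # sudden wrist jump
print("Frames flagged for rapid limb movement:", fast_wrist_frames(kp))
```

Even in this toy form, the design choice the article emphasizes is visible: the input is motion geometry, not identity, and any flag would still need human review before intervention.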

IOL News
18-05-2025
- Health
- IOL News
How family breakdowns and poverty fuel homelessness
South Africa's growing homeless crisis includes thousands of vulnerable children. Many flee broken homes, only to find hunger, abuse, and fear on the streets instead. In South Africa's city centres, including Durban, a growing crisis sees children living on the streets due to family breakdown, abuse, poverty, and systemic failures. Many flee unsafe homes and find little support from under-resourced schools and overwhelmed government systems. Once on the streets, they face hunger, violence, and exploitation.

According to Statistics South Africa, the number of homeless people in the country more than quadrupled between 1996 and 2022, from 13,135 to 55,719. Among them, children under 18 account for about 7%, highlighting the growing vulnerability of minors in this crisis. While smaller in proportion, homeless children face significantly higher risks due to their age, dependency, and lack of protection. The 2022 census revealed that 45.7% of homeless children under 15 were living in shelters, while 26.9% were found in abandoned buildings or vehicles, reflecting unsafe and unstable conditions.

'From our experience at iCare, the most common reasons include family breakdown, domestic violence, neglect, poverty, and substance abuse within the home,' said Anne Slatter, director at the Durban-based shelter iCare. 'Many children flee toxic or unsafe environments, while others are orphaned or abandoned.' Clinical and sports psychologist Dr Keitumetse 'Tumi' Mashego added, 'When there is a lot of trauma, dysfunction, toxicity, abuse or even perceived unfairness in the family, children can be negatively impacted and sadly some run away from home due to the inability to cope with the family dynamic.'

Once on the streets, survival becomes a daily struggle. 'They enter survival mode,' Mashego explained. 'There are issues of safety, hunger, keeping alive, and they could be targeted or bullied by other homeless people. It becomes a survival of the fittest game.'

The school system, ideally a safety net, often fails these children. 'School disengagement is both a cause and a consequence of life on the streets,' said Slatter. 'Once children drop out, they lose routine, structure, and hope for the future.' Mashego highlighted the lack of psychosocial support in schools: 'Ideally there should be social workers available to schools to screen, identify and intervene as early as possible. But unfortunately, it's not happening in the schooling systems for varying reasons, often failing the vulnerable.'


Al Etihad
15-05-2025
- Health
- Al Etihad
ECAE launches iCare initiative to encourage parental engagement in classroom learning
15 May 2025 16:05

ABU DHABI (ALETIHAD) The Emirates College for Advanced Education (ECAE) announced on Thursday the launch of iCare, a new initiative designed to strengthen parental involvement in children's education across UAE schools.

Aligned with the aspirations of the UAE's Year of Community, iCare promotes stronger social connections by encouraging active collaboration between parents and schools, emphasising the essential role families play in student success and holistic development. Through a series of interactive workshops, structured activities, and accessible resources, iCare equips parents with the knowledge and tools needed to support their children's learning at home. By bridging the gap between families and schools, the initiative creates a supportive and stimulating environment that enhances academic achievement, promotes emotional well-being, and encourages lifelong learning. With a comprehensive approach to engagement, iCare empowers parents to play a proactive role in their children's learning journey, reinforcing the vital connection between home and school.

May Laith Al Taee, Vice Chancellor of ECAE, said: 'Education is a shared responsibility that extends beyond the classroom, shaping the foundation of a strong and connected society.' She added, 'iCare embodies the spirit of the UAE's Year of Community by empowering parents to take an active role in their children's education, strengthening social bonds, and fostering a culture of collaboration and collective growth. By equipping parents with essential skills, strategies, and resources, the initiative ensures an inclusive learning environment that not only supports academic success but also addresses students' psychological and emotional well-being, providing them with the right guidance at the right time.'

The initiative includes workshops and training sessions focused on positive parenting, academic support, and stress management. Parents will gain insights into motivation, discipline, communication strategies, and practical techniques to reinforce learning at home.

In addition to training, iCare offers community-based programmes that make learning an interactive and engaging experience. Family Science and Math Nights provide hands-on STEM activities to spark curiosity and engagement, while the Reading Together initiative encourages parents and children to explore books together, fostering literacy and critical thinking. Storytelling and cultural exchange events create a platform for families to share diverse traditions, enhancing multicultural understanding. The Community Learning Hub, an online resource centre, further extends support by providing parents with educational materials, guidance, and best practices to help their children succeed.

By encouraging active parental participation, iCare aims to boost student motivation, improve academic performance, and strengthen community ties. It serves as a sustainable model for long-term parental engagement, ensuring that families remain key partners in their children's education. ECAE's launch of iCare reaffirms its commitment to fostering a dynamic and inclusive educational environment where students receive the support, encouragement, and resources needed to excel. Through this initiative, the college continues to champion the role of families in shaping a strong, knowledge-driven future for the UAE.

Source: Aletihad - Abu Dhabi