Head Start Ban: Trump Administration Says Undocumented Immigrants Blocked From Program
Forbes | 10-07-2025
The Department of Health and Human Services on Thursday announced a ban blocking undocumented immigrants from Head Start and other federal programs, in the Trump administration's latest move to discourage illegal immigration.
Photo caption: Health and Human Services Secretary Robert F. Kennedy Jr. speaks before U.S. Agriculture Secretary Brooke Rollins signs three new SNAP food choice waivers for the states of Idaho, Utah, and Arkansas on June 10, 2025 in Washington.

On Thursday, the Department of Health and Human Services announced it officially rescinded a Clinton administration interpretation of the 1996 Personal Responsibility and Work Opportunity Reconciliation Act, which had extended some federal public programs to undocumented immigrants.
HHS Secretary Robert F. Kennedy Jr. said the government will no longer 'incentivize illegal immigration' by providing certain federal programs to undocumented immigrants.
Programs blocked for undocumented immigrants as a result of the policy include Head Start, substance abuse treatment, mental health services, homelessness transition services and Title X family planning.
Head Start did not previously require proof of immigration status for enrollment.
Head Start is a federal early-childhood program that provides education, health and family services to low-income children and families. The program is aimed at improving school readiness and serves children from the prenatal stage through age 5. Head Start has served more than 40 million children.

Key Background
Trump issued an executive order in February directing federal agencies to crack down on undocumented immigrants' access to programs subsidized by taxpayers. Undocumented immigrants paid $55.8 billion in federal taxes and $33.9 billion in state and local taxes in 2023, according to the American Immigration Council. Some of the programs that HHS is restricting access to, such as Head Start, haven't historically determined program eligibility based on immigration status.
'This decision undermines the fundamental commitment that the country has made to children and disregards decades of evidence that Head Start is essential to our collective future,' said Yasmina Vinci, the executive director of the National Head Start Association, in a statement. 'Head Start programs strive to make every child feel welcome, safe, and supported, and reject the characterization of any child as 'illegal.''

Related Articles

FDA's artificial intelligence is supposed to revolutionize drug approvals. It's making up nonexistent studies.

CNN | a few seconds ago

To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and be a key assistant in the quest to slash wasteful government spending without jeopardizing their work. 'The AI revolution has arrived,' Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months. 'We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,' he told the House Energy and Commerce Committee in June. The enthusiasm — among some, at least — was palpable.

Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals. Yet behind the scenes, the agency's slick AI project has been greeted with a shrug — or outright alarm.

Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communique templates. But it has also made up nonexistent studies, known as AI 'hallucinating,' or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.

'Anything that you don't have time to double-check is unreliable. It hallucinates confidently,' said one employee — a far cry from what has been publicly promised. 'AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have' to check for fake or misrepresented studies, a second FDA employee said.

Currently, Elsa cannot help with review work, the lengthy assessment agency scientists undertake to determine whether drugs and devices are safe and effective, two FDA staffers said. That's because it cannot access many relevant documents, like industry submissions, to answer basic questions such as how many times a company may have filed for FDA approval, their related products on the market or other company-specific information.

All this raises serious questions about the integrity of a tool that FDA Commissioner Dr. Marty Makary has boasted will transform the system for approving drugs and medical devices in the US, at a time when there is almost no federal oversight for assessing the use of AI in medicine.

'The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,' the FDA said in a statement on its launch in June. But speaking to CNN at the FDA's White Oak headquarters this week, Makary said that right now, most of the agency's scientists are using Elsa for its 'organization abilities' like finding studies and summarizing meetings.

The FDA's head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies. 'Elsa is no different from lots of [large language models] and generative AI,' he told CNN. 'They could potentially hallucinate.'
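The failure mode these employees describe, confidently citing studies that do not exist, is common to large language models generally. As a purely illustrative sketch of the kind of double-checking reviewers say they must do, the hypothetical Python snippet below verifies model-cited study identifiers against a trusted index; the index, IDs, and function names are invented for illustration and do not reflect Elsa's actual interface:

```python
# Hypothetical sketch: confirm that studies cited by an AI-generated summary
# actually exist in a trusted index before relying on them. The index and
# identifiers below are invented examples, not real FDA data.

TRUSTED_INDEX = {
    "NCT01234567": "Phase 3 trial of drug X in pediatric patients",
    "NCT07654321": "Long-term safety study of device Y",
}

def check_citations(cited_ids: list[str]) -> dict[str, bool]:
    """Return which cited study IDs can be confirmed in the trusted index."""
    return {study_id: study_id in TRUSTED_INDEX for study_id in cited_ids}

# A reviewer would flag any citation that cannot be confirmed:
results = check_citations(["NCT01234567", "NCT99999999"])
unverified = [study_id for study_id, found in results.items() if not found]
if unverified:
    print(f"Possible hallucinated citations: {unverified}")
```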
Walsh also said Elsa's shortcomings with responding to questions about industry information should change soon, as the FDA updates the program in the coming weeks to let users upload documents to their own libraries.

Asked about mistakes Elsa is making, Makary noted that staff are not required to use the AI. 'I have not heard those specific concerns, but it's optional,' he said. 'They don't have to use Elsa if they don't find it to have value.' Challenged on how this squares with the efficiency gains he has publicly touted when staff inside FDA have told CNN they must double-check its work, he said: 'You have to determine what is reliable information that [you] can make major decisions based on, and I think we do a great job of that.'

The earliest iterations of Elsa were built from the backbone of an earlier AI model that the FDA had started to work on during the Biden administration, according to two sources familiar with the matter. The name was initially an acronym for Efficient Language System for Analysis and was among several pitches for names for the AI system, like 'RegulAItor.' Elsa eventually won out, though leadership ultimately decided against its longer title: A recent internal document seen by CNN says that now 'Elsa is just a name and is not an acronym.'

Walsh and his team demonstrated the AI tool for CNN this week. The platform has a plain white interface with some brown accents. It welcomes the user with 'How can I help you?' above an entry field that says 'Ask Elsa anything,' much like other popular publicly available AI tools. The FDA has said that Elsa is designed to let regulators tap into secure internal documents, shortening reviews by quickly summarizing risky side effects and pulling in information about related products. During the demonstration, Elsa was asked to summarize the FDA's guidance on fatty liver disease and medicines that treat it. It pulled up the 10 papers from an internal FDA library that it said were the most relevant.

When it was adopted in June, Makary boasted that Elsa's rollout had come 'ahead of schedule and under budget' after 'a very successful pilot program with FDA's scientific reviewers.' Walsh said those efforts came together in a matter of weeks. The agency leadership chose staff from across its various centers overseeing drugs, devices, food and animal medicines for a series of meetings in May. There, they gave feedback about what they needed from such a tool, potential challenges they saw and even some aesthetic choices, like Elsa's color palette and its name, according to an FDA employee who participated. Those who participated in the feedback meetings were dubbed Elsa 'champions' and sent to evangelize the platform in their various corners of the agency, with talking points and suggestions about how to demonstrate its use, according to two current FDA staff.

Agency training on Elsa is voluntary, as is using the platform at all. Makary and Walsh told CNN that more than half of FDA staff have logged time in Elsa. But those who spoke to CNN said that adoption has been weak in their areas of the agency — not many of their colleagues are using Elsa, or they are using it only on a very limited basis. Those who have used it say they have noticed serious problems. For example, it cannot reliably represent studies.
If Elsa gives a one-paragraph summary of, say, 20 pages of research tied to a particular new drug, there is no way to know whether it misrepresents something or misses something that a human reviewer would have considered important, one FDA employee said. There is no way for Elsa to know what information from a lengthy study could be the most crucial for an expert, this employee believes.

When Elsa is told it is incorrect — that a study it cites does not exist or that someone works at the FDA when they don't — it is usually 'apologetic,' one employee said. But in at least one instance shared with CNN — when that employee asked Elsa to generate something for a project — it insisted that the research area was not in FDA's purview (it was).

Employees who spoke to CNN have tested Elsa's knowledge by asking it questions like how many drugs of a certain class are authorized for children to use or how many drugs are approved with a certain label. In both cases, it returned wrong answers. One employee described Elsa miscounting the number of products with a particular label. When told it was wrong, the AI admitted that it made a mistake. 'But it still doesn't help you to answer the question,' that employee said. The algorithm then reminds users that it is only an AI assistant and they need to verify its work.

Asked about errors, in addition to the hallucinations, Walsh said: 'Some of those responses don't surprise me at all. But what's important is … how we address those gaps in the capability' of Elsa and its users. Those include trainings and new features like the personal document libraries that will launch soon, he added.

Walsh also said that a current feature of Elsa, where users can click over its summaries to see which parts of a document Elsa has cited, can act as a check to make sure it did not fabricate a study. However, this currently applies only when Elsa is being used to pull internal documents. As of now, it cannot link to, for example, articles in a medical journal. And knowing whether those sources are, in fact, the most important is also up to the user and how they ask the questions, Walsh said. He also contended that the problem of Elsa's hallucinations can be mitigated by asking it more precise questions. Elsa is also improving, he insists: 'We're also seeing as the AI models get better, right, feedback gets better.'
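The click-through citation feature Walsh describes resembles a general technique sometimes called span grounding: every excerpt a summary attributes to a source document must actually occur in that document, or the claim is flagged for review. The hypothetical sketch below illustrates that general idea; the document store, claim format, and function are invented for illustration and are not a description of Elsa's internals:

```python
# Hypothetical span-grounding check: each claim pairs a document ID with the
# excerpt the model says it quoted. An excerpt that does not occur verbatim
# in its source document is flagged as potentially fabricated.

def find_ungrounded(summary_claims: list[tuple[str, str]],
                    documents: dict[str, str]) -> list[str]:
    """Return claims whose quoted excerpt is missing from the cited document."""
    flagged = []
    for doc_id, excerpt in summary_claims:
        source_text = documents.get(doc_id, "")
        if excerpt not in source_text:  # excerpt must appear verbatim
            flagged.append(f"{doc_id}: {excerpt!r}")
    return flagged

docs = {"guidance-042": "Patients with fatty liver disease should be monitored quarterly."}
claims = [("guidance-042", "monitored quarterly"),   # grounded
          ("guidance-042", "monitored annually")]    # not in the source: flagged
print(find_ungrounded(claims, docs))
```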
Talk of integrating artificial intelligence into US health agencies' work had been underway for some time before the second Trump administration jump-started efforts, but the speed with which Elsa came into use was unusual. Some experts pinpoint the start of the government's efforts to develop AI plans in earnest to 2018, when the Pentagon began evaluating its potential for national security. Part of that project was about looking into its use in health care too, said Dr. Hassan Tetteh, a thoracic surgeon and former US Navy captain who worked on the project in 2020. There were also early efforts from that Pentagon-led group to talk with international allies about AI standards and regulations, he added.

In Europe, countries have worked together to stand up AI safeguards. In 2024, the European Union approved and implemented the AI Act, a law 'to protect fundamental rights, democracy, the rule of law' around risky AI use, including in health care, while promoting transformational AI models. These standards and protections do not exist in the US.

A government working group formed during the Biden administration to look at establishing regulations on AI use, including in health care, was disbanded last year: Its mandate expired and was not renewed.

Elsa arrived as Congress wrestled with how to approach laws on AI regulation. Although congressional committees have held hearings about AI risks like biased models and cybersecurity threats, Congress has passed no substantial legislation to regulate AI. In June, a bipartisan group of House members introduced legislation mostly focused on maintaining US dominance in the AI race; later that month, two senators introduced a bill aimed at preventing American use of 'adversarial' AI from foreign governments, including China. Other efforts, such as a bill that would require testing and regulatory oversight for high-risk AI systems (much like the European standards), have stalled. An earlier version of the 'One Big Beautiful Bill,' President Donald Trump's expansive tax and spending bill, would have included Congress' first sweeping law on AI: a 10-year moratorium on the enforcement of state regulations of the technology. But the Senate struck the provision down.

Trump, who has made AI development and investments a top priority in his second administration, has heralded a bright future for the technology. At an energy summit in Pennsylvania last week, he told attendees: 'We're here today because we believe that America's destiny is to dominate every industry and be the first in every technology, and that includes being the world's number one superpower in artificial intelligence.'

Without federal regulations, it is hard to say what that superpower would look like. 'AI does a lot of stuff, but it's not magic,' said Dr. Jonathan Chen, an assistant professor of medicine at Stanford University who has studied the use of AI in clinical settings. It would be great if it could help experts sniff out data falsification or give rigorous analysis on patient safety, but 'those problems are much more nuanced' than what a machine can do, he said. 'It's really kind of the Wild West right now. The technology moves so fast, it's hard to even comprehend exactly what it is.'


Taiwan says trade delegation in Washington for talks on potential tariff and trade deal

Yahoo | 28 minutes ago

TAIPEI (Reuters) - Taiwan's government said on Wednesday that a trade delegation led by the vice premier was in Washington, D.C., for a new round of in-person negotiations with U.S. officials this week.

U.S. President Donald Trump has proposed imposing tariffs of as much as 32% on Taiwan. No new tariffs have yet been announced for the democratically governed island, although the 90-day pause on worldwide tariffs Trump proposed in April has already expired.

The delegation, led by Vice Premier Cheng Li-chun, seeks to safeguard Taiwan's industrial interests, public health, and food security, according to a cabinet statement. The talks aim to promote balanced trade and improve the overall economic and trade framework between the two sides, it added.

'The team will continue working under the principles of protecting Taiwan's industries and public welfare,' the statement said. 'We hope to optimise the trade system and lay the groundwork for a stronger partnership in the future.'

The Taiwan talks come as trade negotiations in the region accelerate. On Wednesday, the United States and Japan announced a trade agreement that includes a 15% U.S. import tariff on all Japanese goods, lower than the 25% Washington had previously proposed. The Japan deal is seen as one of the most significant among several agreements reached ahead of the August 1 tariff deadline the White House set after the original 90-day deadline expired with only a few successfully negotiated agreements.

Taiwan has been seeking to strengthen its trade ties with major partners, particularly the U.S., Taiwan's second-largest trading partner after China, amid growing geopolitical and economic challenges. The outcome of the negotiations could play a key role in shaping the island's future trade strategy and its position in the global supply chain, and is crucial to Taiwan's export-driven economy.
