
The FDA Launches Its Generative-AI Tool, Elsa, Ahead Of Schedule
The FDA's tool—nicknamed Elsa—is designed to assist employees with everything from scientific reviews to basic operations. Originally, the FDA planned to launch by June 30, so Elsa is well ahead of schedule and under budget, according to an FDA statement.
It's not clear exactly what information Elsa was trained on, but the FDA says it didn't use any 'data submitted by regulated industry' in order to protect sensitive research and information. Currently, Elsa houses its information in GovCloud, an Amazon Web Services environment designed for sensitive, regulated U.S. government data.
As a language model, Elsa can help employees with reading, writing, and summarizing. In addition, the FDA said that it can summarize adverse events, generate code for nonclinical applications, and more. Per the agency, Elsa is already being used to 'accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.'
In a May press release announcing the completion of the FDA's first AI-assisted scientific review, FDA Commissioner Marty Makary said he was 'blown away' by Elsa's capabilities, which '[hold] tremendous promise in accelerating the review time for new therapies'. He added, 'We need to value our scientists' time and reduce the amount of non-productive busywork that has historically consumed much of the review process.'
According to FDA scientist Jinzhong Liu, the agency's generative AI completed tasks in minutes that would otherwise take several days. In Tuesday's announcement, FDA Chief AI Officer Jeremy Walsh said, 'Today marks the dawn of the AI era at the FDA with the release of Elsa, AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee.'
Generative AI can certainly be a useful tool, but every tool has its drawbacks. With AI specifically, there has been an uptick in stories about hallucinations: outputs that state false or misleading claims as fact. Although commonly associated with chatbots like ChatGPT, hallucinations can just as easily surface in federal AI models, where the consequences can be far more serious.
Per IT Veterans, AI hallucinations typically stem from factors like biases in training data or a lack of fact-checking safeguards built into the model itself. Even with those in place, though, IT Veterans cautions that human oversight is 'essential to mitigate the risks and ensure the reliability of AI-integrated federal data streams'.
Ideally, the FDA has thought this through and taken measures to prevent mishaps in Elsa's use. But expanding a technology that demands human oversight is concerning at a time when federal agencies are undergoing mass layoffs. At the beginning of April, the FDA laid off 3,500 employees, including scientists and inspection staff (although some layoffs were later reversed).
Time will tell how Elsa ultimately performs. As the tool matures, the FDA plans to expand its use throughout the agency, including data processing and additional generative-AI functions to 'further support the FDA's mission.'