Latest news with #OrenEtzioni


Boston Globe
14-06-2025
- Health
Robert F. Kennedy Jr. wants health agencies to use a lot more AI. After the MAHA report, experts have some concerns
And with the apparent inclusion of material imagined by AI, referred to as hallucinations, in the 'Make America Healthy Again' report, they see an ominous sign for the administration's ability to deploy it safely.

'If they were proceeding more apace, and I'm not talking about glacial government pace, I'm talking about responsible pace, this wouldn't happen,' said Oren Etzioni, a professor emeritus at the University of Washington and entrepreneur who studies AI. 'The presence of these embarrassing missteps just shows that it's amateur hour.'

Kennedy has expressed ambitious but vague plans, usually in the context of cutting costs. He has outlined his vision during several congressional hearings, including plans to replace the use of animals in experimental testing and some steps in clinical trials. 'We're phasing out most animal studies . . . because we can accomplish a lot of those goals on safety and efficacy with AI technology,' Kennedy said.

He also mentioned using AI to analyze data that HHS and other agencies have collected on patients, such as people on Medicare and Medicaid. He said they've recruited experts to 'transform our agency for a central hub for AI.' So far, the Food and Drug Administration has announced plans to replace some animal testing with computer and AI modeling. 'In the long-term (3-5 years), FDA will aim to make animal studies the exception rather than the norm for pre-clinical safety/toxicity testing,' the FDA said in its road map.

But the mistakes in the 'Make America Healthy Again' report have experts skeptical of HHS's ability to use AI correctly. The report cited sources that do not exist and had garbled footnotes. The agency declined to answer definitively whether AI was used on the report and why it contained nonexistent citations, instead only highlighting the substance of the report. It also did not offer specifics about protocols for responsible AI use. 'HHS is addressing the risk of AI-generated errors through rigorous validation, human oversight, and strict quality controls,' a spokesperson said in a statement. 'AI tools are designed to support — not replace — expert judgment.'

Experts say the specifics of how AI is implemented will be the true measure of whether the efforts at HHS will succeed or end up being harmful. 'I'm actually deeply optimistic about what [AI] can do in a lot of areas, including the ones that the secretary mentioned,' said Ziad Obermeyer, a physician and researcher at the University of California, Berkeley who studies AI in biomedicine. 'What my research has shown is that it actually comes down to some of the really boring details that make the difference between a good, powerful algorithm that helps people, and one that really messes things up.'

Republicans who work on AI issues in the Senate supported Kennedy's goals but also agreed on the importance of rolling it out with the right protections. 'This is going to be the future,' said Indiana Senator Todd Young. 'I mean, we'd be doing something wrong if the head of our health agency wasn't talking about using AI.'

The two general use cases that Kennedy has mentioned, replacing steps in clinical trials and analyzing patient data, have some potential issues in common, including that AI can generate false information. But they also have risks unique to each case. Privacy, for example, is a serious concern with patient data.
If not properly stripped of identifying factors, even supposedly anonymized data can be re-identified, as has happened in some cases. 'The most secret private information that people have is their health care data, and so AI should not be used in any way that does not have the strongest possible safeguards,' said Massachusetts Senator Ed Markey, a Democrat. 'We could have an absolute privacy catastrophe.'

Harvard Law School professor I. Glenn Cohen, who studies medical ethics and AI, said that the idea holds great potential, but that the administration would need to be very transparent about how it is protecting data and would be wise to run smaller pilot studies first. 'The "move fast and break things" ethos of Silicon Valley may be appropriate in some parts of life — I don't really care if you're doing it for the order of Instagram postings,' Cohen said. 'But it's not a philosophy we advocate for physicians or an attitude I think most people want health care to take.'

The key limitation of AI is that it is only as good as the dataset used to build it. In specific areas where scientific data is really good and outcomes are predictable, such as the structure of proteins, scientists have built powerful AI tools. AI can also help doctors and patients assess symptoms. But those are different from the discovery of new information, experts say, which is what a lot of science and clinical trials for novel treatments are designed to explore.

Allison Coffin is a researcher at Creighton University who studies hearing loss, including that caused by certain medicines. She uses mostly zebrafish in her work, but also rodents. She says her lab is working on AI tools to help identify potential toxins in order to conduct more targeted research. But, she said, AI would always be used as an idea generator for testing in animals, not to replace them. 'That's an excellent case for AI, because AI can rapidly assess millions of potential drug structures. But you would still want to test their efficacy for new therapies in an animal,' Coffin said. 'I would never want to take a medication that hadn't been given to a living creature before, and I would think most people wouldn't. Do we want to be the first to take medication because a computer model says that it's safe?'

Other scientists questioned the ability of the government to do the cutting-edge research necessary after the administration's deep cuts to research funding and staff. 'Honestly I'm struggling for what to say,' wrote Sean Eddy, a Harvard scientist who works on building computer models for biology and genomic research. 'I just don't see how it makes sense for HHS to talk about delivering innovative technological breakthroughs while they're destabilizing and belittling the US scientific research enterprise. . . . Every lab at Harvard that does this kind of research, including my own, has had all their federal funding terminated.'

Experts also question whether Kennedy understands where AI technology actually is today versus its potential capacities. Many cited cautionary tales of much lower-stakes AI deployments gone wrong. 'Doing things like simulating an entire body in order to save clinical trials is just grossly unrealistic where we sit right now,' said Gary Marcus, a professor emeritus at New York University and critic of AI enthusiasm. 'If we're lucky, we can do it in 40 [years], but we certainly can't now. That's just a pipe dream.'


CNN
10-06-2025
- Business
The next tech revolution probably won't look anything like the last one
In a little more than two years, AI has gone from powering what was once a niche chatbot to being a catalyst for what some tech leaders are calling a tidal wave that could be as life changing as the internet. AI is being framed as the next major iteration of how people use technology. But that shift is already unfolding very differently from how other major technological advancements have played out, like the internet, social media and the smartphone.

While Apple and Samsung release new smartphones once or twice a year, AI models are constantly evolving — and are far less predictable than cyclical products. '(AI models) are launching far, far faster than once a year. These updates are actually fast and furious,' Oren Etzioni, former CEO of the nonprofit Allen Institute for Artificial Intelligence, told CNN. '…These models can be opaque, unpredictable (and) difficult to measure because they're so general.'

And the way people embrace AI assistants might look different from how smartphones, web browsers and social media apps have shown up in our lives. In the smartphone and social media industries, major players such as Apple, Google and Meta emerged early on and cemented their position for more than a decade. But in the AI industry, being first may not necessarily guarantee long-term success. 'It seems like it won't be a sort of "winner-takes-all" market,' said Daniel Keum, associate professor of management at Columbia Business School.

AI somehow seems to be moving lightning fast but also not quickly enough, as evidenced by product delays such as Apple's revamped version of Siri, which has been pushed back indefinitely. Apple's Worldwide Developers Conference keynote, which took place on Monday, did little to change that. The company announced new AI-powered tools for language translation and updates to its image-based search tool, among other changes, but didn't say when its upgraded virtual helper would arrive. 'As we've shared, we're continuing our work to deliver the features that make Siri even more personal,' Apple's senior vice president of software engineering, Craig Federighi, said during the event, echoing comments from CEO Tim Cook on the company's most recent earnings call. 'This work needed more time to reach our high-quality bar, and we look forward to sharing more about it in the coming year.'

And it's not just Apple; OpenAI has yet to release its anticipated GPT-5 model, and Meta is said to have pushed back the launch of its next major Llama model, according to The Wall Street Journal. Meanwhile, the advancement and adoption of AI are accelerating: performance has significantly improved from 2023 to 2024, and 78% of businesses reported using AI in 2024, up from 55% in 2023, according to Stanford University's 2025 AI Index Report.

Part of the reason timelines for AI updates seem to shift so frequently is that performance can be hard to quantify, says Etzioni. An AI tool might excel in one area and fall behind in another, and a small change may lead to an unpredictable shift in how the product works. In May, ChatGPT became 'annoying' after an update, and xAI's Grok chatbot went on unprompted rants about 'white genocide' in South Africa.

Tech companies are also still establishing how frequently they'll be able to release major updates that significantly change how consumers use AI chatbots and models, versus smaller, more incremental updates.
That differs from more familiar tech categories like phones and laptops, and even software upgrades like new versions of Google's Android and Apple's iOS, which get major platform-wide updates each year. Changes in AI might see less fanfare because the technology's advancements 'are becoming more incremental and harder to label and to present to customers as really significant changes,' said Leo Gebbie, principal analyst at tech analysis firm CCS Insight.

Tech that defined the early 2000s, like Facebook and the iPhone, heavily benefited from what's known as 'the network effect,' or the idea that the more people use a product, the more valuable it becomes. Without its massive network of users, Facebook wouldn't have become the social network behemoth it is today. Facebook parent Meta says its products are used by 3 billion people worldwide. Apple-exclusive services like iMessage are a major selling point for Apple's products.

But AI platforms like OpenAI's ChatGPT and Google's Gemini are not meant to be social. While these services will likely improve the more people use them, it doesn't really matter whether a person's friends or family are using them. AI assistants become more useful as they get to know you. 'It's people doing their own individual tasks,' said Darrell West, senior fellow at the Brookings Institution's Center for Technology Innovation. 'It's not like the platform becomes more valuable if all your friends are on the same platform.'

AI may also defy the longstanding narrative that being first is best in the technology industry, as was the case in smartphones, social media and web browsers. Apple's iOS and Google's Android dominate the mobile device market, marking the end of mobile platforms from the pre-smartphone era like BlackBerry OS, Microsoft's Windows Phone and Nokia's Symbian. Google's Chrome browser accounts for roughly 67% of global browser usage, while even Apple's Safari is a distant second at about 17%, according to StatCounter Global Stats. And Americans tend to remain devoted to their smartphone platform of choice, as iOS and Android both see customer loyalty rates above 90%, Consumer Intelligence Research Partners reported in 2023.

But it's unclear whether similar usage patterns will emerge in AI. While users may be inclined to stick with chatbots and services that learn their preferences, it's also possible people may use multiple specialized services for different tasks. That takes some pressure off companies worried about falling behind, as consumers may not be as tightly locked into whichever AI service they happen to use first. 'Even if I fall behind, like a quarter generation, I can easily catch up,' said Keum. 'And once I improve, people will come back to me.'


Daily Tribune
07-06-2025
AI-generated Pope sermons flood YouTube, TikTok
AFP | Paris

AI-generated videos and audio of Pope Leo XIV are proliferating rapidly online, racking up views as platforms struggle to police them. An AFP investigation identified dozens of YouTube and TikTok pages that have been churning out AI-generated messages delivered in the pope's voice or otherwise attributed to him since he took charge of the Catholic Church last month. The hundreds of fabricated sermons and speeches, in English and Spanish, underscore how easily hoaxes created using artificial intelligence can elude detection and dupe viewers.

'There's natural interest in what the new pope has to say, and people don't yet know his stance and style,' said University of Washington professor emeritus Oren Etzioni, founder of a nonprofit focused on fighting deepfakes. 'A perfect opportunity to sow mischief with AI-generated misinformation.'

After AFP presented YouTube with 26 channels posting predominantly AI-generated pope content, the platform terminated 16 of them for violating its policies against spam, deceptive practices and scams, and another for violating YouTube's terms of service. The company also booted an additional six pages from its partner program allowing creators to monetize their content. TikTok similarly removed 11 accounts that AFP pointed out – with over 1.3 million combined followers – citing the platform's policies against impersonation, harmful misinformation and misleading AI-generated content of public figures.

With names such as 'Pope Leo XIV Vision,' the social media pages portrayed the pontiff supposedly offering a flurry of warnings and lessons he never preached.


Express Tribune
06-06-2025
TikTok, YouTube rack up views of AI-generated Pope sermons
AI-generated videos and audio of Pope Leo XIV are proliferating rapidly online, racking up views as platforms struggle to police them. An AFP investigation identified dozens of YouTube and TikTok pages that have been churning out AI-generated messages delivered in the pope's voice or otherwise attributed to him since he took charge of the Catholic Church last month. The hundreds of fabricated sermons and speeches, in English and Spanish, underscore how easily hoaxes created using artificial intelligence can elude detection and dupe viewers.

"There's natural interest in what the new pope has to say, and people don't yet know his stance and style," said University of Washington professor emeritus Oren Etzioni, founder of a nonprofit focused on fighting deepfakes. "A perfect opportunity to sow mischief with AI-generated misinformation."

After AFP presented YouTube with 26 channels posting predominantly AI-generated pope content, the platform terminated 16 of them for violating its policies against spam, deceptive practices and scams, and another for violating YouTube's terms of service. "We terminated several channels flagged to us by AFP for violating our Spam policies and Terms of Service," spokesperson Jack Malon said. TikTok similarly removed 11 accounts that AFP pointed out – with over 1.3 million combined followers – citing the platform's policies against impersonation, harmful misinformation and misleading AI-generated content of public figures.

With names such as "Pope Leo XIV Vision," the social media pages portrayed the pontiff supposedly offering a flurry of warnings and lessons he never preached. But disclaimers annotating their use of AI were often hard to find – and sometimes nonexistent. On YouTube, a label demarcating "altered or synthetic content" is required for material that makes someone appear to say something they did not. But such disclosures only show up toward the bottom of each video's click-to-open description. A YouTube spokesperson said the company has since applied a more prominent label to some videos on the channels flagged by AFP that were not found to have violated the platform's guidelines.

TikTok also requires creators to label posts sharing realistic AI-generated content, though several pope-centric videos went unmarked. A TikTok spokesperson said the company proactively removes policy-violating content and uses verified badges to signal authentic accounts.

Brian Patrick Green, director of technology ethics at Santa Clara University, said the moderation difficulties are the result of rapid AI developments inspiring "chaotic uses of the technology." The AI-generated sermons not only "corrode the pope's moral authority" and "make whatever he actually says less believable," Green said, but could be harnessed "to build up trust around your channel before having the pope say something outrageous or politically expedient."

AFP