AI in health care could save lives and money – but change won't happen overnight
Japan Today, 18 hours ago
By Turgay Ayer
Imagine walking into your doctor's office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what's wrong.
This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives.
What's more, a 2023 study found that if the health care industry significantly increased its use of AI, up to $360 billion annually could be saved.
But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low.
A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of that use was for administrative or low-risk support tasks. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.
I'm a professor and researcher who studies AI and health care analytics. I'll try to explain why AI's growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI's widespread adoption by the medical industry.
Inaccurate diagnoses, racial bias
Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this could lead to faster, more accurate diagnoses and more personalized care.
AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.
But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn't perfectly match the patient in front of them.
As a result, AI doesn't always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.
Racial and ethnic bias is another issue. If training data doesn't include enough patients from certain racial or ethnic groups, AI might give inaccurate recommendations for those patients, leading to misdiagnoses. Some evidence suggests this has already happened.
Data-sharing concerns, unrealistic expectations
Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting: introducing a new technology like AI disrupts daily routines, and staff need extra training to use AI tools effectively. Many hospitals, clinics and doctors' offices simply don't have the time, personnel, money or will to implement AI.
Also, many cutting-edge AI systems operate as opaque 'black boxes.' They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification.
But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.
There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records.
For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient's data. U.S. regulations such as the HIPAA law impose strict rules on health data sharing, which means AI developers need robust safeguards.
Privacy concerns also extend to patients' trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.
The grand promise of AI is a formidable barrier in itself. Expectations are tremendous: AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Such unrealistic expectations often lead to disappointment, because AI may not immediately deliver on its promises.
Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they're safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.
Incremental change
Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries. AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.
Clinical uses of AI exist but are more limited. At some hospitals, AI serves as a second set of eyes for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.
Suffice it to say that health care's transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI's potential to treat millions and save trillions awaits.
Turgay Ayer is Professor of Industrial and Systems Engineering, Georgia Institute of Technology.
The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
External link: https://theconversation.com/ai-in-health-care-could-save-lives-and-money-but-change-wont-happen-overnight-241551
© The Conversation