
Artificial intelligence and virtual care: Transforming healthcare delivery
Daniel Cody is a Health Care and Life Sciences Member at leading law firm Mintz, and he spoke about the pressing need for AI-driven solutions in medical care, saying, 'Hospitals are stressed, especially with ongoing threats to Medicaid and other programs. So, the twin goals of improving outcomes and reducing costs are universal.'
Cody went on to list key ways that AI is already improving the experience of providers and patients. 'Remote monitoring devices are more advanced, with AI capabilities. It's not just about helping folks with diabetes and chronic disease track their conditions but being predictive and giving information to their PCPs on a 24/7 basis. AI tools are also fantastic for helping radiologists evaluate images so they can diagnose and start treatment earlier.'
The tools we now call AI have actually been in use for years, giving organizations a long runway to find the ideal approach. 'Five years ago, AI was called clinical decision support,' says Adnan E. Hamid, Regional Vice President and Chief Information Officer at CommonSpirit Health. 'As one of the larger Catholic healthcare systems in the nation, CommonSpirit makes sure that when we select technology, it's human centric and mission centric. The goal is to not replace but augment the human interaction between the clinician and patient.'
To reach this goal, medical organizations must navigate an ever-evolving field of regulations. 'We have a systemwide UC AI Council and similar oversight committees, and a chief AI officer at each medical center. The UC AI Council sponsored the development of the UC Responsible AI Principles, and a publicly available model risk assessment guide with procurement process questions built in. We offer an AI primer, and many of our education webinars are open to the public. Twenty UC policies connect to UC AI guidance, considering the many privacy and security requirements on the campus and health side,' says Noelle Vidal, Healthcare Compliance and Privacy Officer for the Office of the President of the University of California.
Regulations such as HIPAA are all-important when considering whether to use an AI tool, especially since the better-known apps add user data to their own algorithms. 'When ChatGPT was released, our providers were interested in the power of generative AI,' Hamid says. 'But if you enter patient information, it's no longer private but resides as part of the tool. To ensure nobody was accessing ChatGPT from our systems, we accelerated efforts to produce our own internal generative AI tool using Google Gemini on the back end. Data and security are our IT cornerstones.'
AI adds a new layer to assess. As Vidal says, 'A thorough assessment can take a while. Whenever we get a request, it goes through a multi-team scan for privacy, security, and other UC requirements, including the new AI assessment questions. An AI tool's use of data continues to evolve and change how impactful the tool will be. Every change in the technology could contradict what we negotiated earlier in a prior contract with the same vendor. We've got different teams to rank the risk of using a tool. I know it frustrates our stakeholders who want to get every innovation in quickly, but we try to balance adoption with risk prevention.'
Ultimately, only the AI applications with the most practical uses will clear the vetting and regulatory process and change how practitioners improve the patient experience and the efficacy of healthcare.
'The targeted tools that solve real problems are going to win,' Cody says. 'They're going to ensure security and privacy compliance.' As noted by Hamid, 'the fastest way to get technology approved is to have a really good use case. If you can provide the details of how the tool will solve a problem, then users will complete that process faster. Ultimately, AI adoption is influenced by the structure and mission of the organization.'