How AI Will Impact Healthcare Risk Management: An Interview with AI Expert
- Meghan Anzelc, Ph.D.

[ Reprinted with permission from The Association for Healthcare Risk Management of New York, Inc., Risk Management Journal, Volume I 2024 ]

There seem to be new headlines every day about how Artificial Intelligence (AI) is impacting our work and lives. Industries are being disrupted and AI is transforming how people do their work. As Bill Gates says, “no one has all the answers on AI.” Healthcare risk management will surely be impacted as well, so our conversation centers on ideas that healthcare risk managers can put to use. To help us better understand the current landscape and what the future may hold for AI in healthcare, I sat down with Dr. Meghan Anzelc, a physics Ph.D. whose career has focused on using AI to help companies perform better.
| Bonnie: Meghan, to get us started, how do you define artificial intelligence, also called “AI”?
Meghan: The term itself has been around a long time, coined by John McCarthy1 in 1956, but for today’s use of the term the Encyclopedia Britannica’s definition is a good one. It states, “artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” And intelligent beings are those that can adapt to changing circumstances, like ourselves.
You’ve probably also heard the terms “machine learning” and “generative AI,” and both fall under the umbrella of AI. Machine learning (called “ML” for short) is a subset of AI that applies algorithms to data to uncover patterns. Your email provider’s spam filtering uses machine learning, and so do many other common applications we interact with, like recommended products on a shopping website or the price we pay for airline tickets. Generative AI is a branch of AI that creates new content based on a set of training data. ChatGPT is a specific generative AI tool that outputs sentences and paragraphs based on a prompt question from the user. Other generative AI tools create images or video, and still others create different types of sound, like music and voice.
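To make “applying algorithms to data to uncover patterns” concrete, here is a minimal sketch of the spam-filtering idea Meghan describes. It is purely illustrative, not any vendor’s actual system: it assumes Python with the scikit-learn library, and the messages and labels are made up.

```python
# Illustrative sketch only: a toy spam filter showing what "learning
# patterns from data" means. Real spam filters use far more data and
# far more sophisticated models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Made-up training data: messages that a human has already labeled.
messages = [
    "Win a free prize now",
    "Lowest price guaranteed, click here",
    "Lunch meeting moved to noon",
    "Please review the attached report",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Count how often each word appears, then learn which words tend to
# appear in each category.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

# The model now classifies new messages based on the patterns it found.
print(model.predict(vectorizer.transform(["Free prize, click now"])))
# -> ['spam']
```

The key point is the same one Meghan makes throughout: the tool’s behavior comes entirely from the data it was trained on, which is why biased or unrepresentative training data leads directly to biased or unreliable outputs.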
| Bonnie: We’ve seen a lot in the news about these capabilities and some of it sounds pretty scary. What are your thoughts on the risks of AI? In my opinion, risk managers who do not consider AI as a risk are missing the boat.
Meghan: We’re a long way from some of the nightmare scenarios you may have read about. There is no sentient AI in existence. There is no AI being used today that can’t be turned off if we want to get rid of it. Fundamentally, AI is applying math to data. It’s just complex math and lots and lots of data.
Fear aside, there are definitely real risks with AI. Biased outcomes from AI tools are a big issue, and they come from biased data being used to train these systems. As an example, back in 2018 Amazon3 had to stop using a recruiting tool it had built because the tool was biased against women, and it was biased against women because the company’s workforce had been predominantly male in the past. Recently, class-action lawsuits have been filed against health insurers for misusing an AI tool to deny medical claims. In healthcare, there is a long history5 of research showing that pulse oximeters have racial bias and don’t work nearly as well for non-White patients.
| Bonnie: Hopefully, Meghan, this will not add to one of our major issues in healthcare: disparity of healthcare access and treatment based on race.
Meghan: Completely agree, Bonnie. It’s critically important that we use AI to improve healthcare access and treatment for all, not inadvertently encode historical biases and disparities. Fundamentally, the risks really come from us and how we build and use these tools. We have to take a thoughtful and measured approach to ensure ethical and responsible development and use of AI that also follows regulations and best practices for ensuring privacy and security.
| Bonnie: What are some of the ways AI is used in healthcare today? I think it has been used for a number of years in diagnostics and similar areas; however, we are now focused on its universal appeal.
Meghan: There have been a number of advancements in helping identify and diagnose risks and diseases in patients. This includes a number of models6 built to predict sepsis in patients, AI-based tools to help diagnose skin cancer, and applications to help diagnose conditions that typically don’t present symptoms, like left ventricular dysfunction. While there are many opportunities for AI to improve diagnosis and patient outcomes, it’s important to consider how best to balance the deep expertise of clinicians and the circumstances of the individual patient with tools that can help in many situations but aren’t perfect and don’t always have all of the relevant data. It’s also critical to follow outcomes over time, to provide feedback to both the AI tools and those working in healthcare so that tools and people alike can continue to improve and learn.
| Bonnie: What about patient safety and other risks that are specific to healthcare? Are there applications of AI in these areas?
Meghan: Yes, there are a number of areas being explored in this space. This includes predicting patients at higher risk for hospital-acquired infections, such as ventilator-associated pneumonia, as well as helping ensure better compliance with risk mitigation protocols like hand hygiene.9 Other applications include identifying patients at higher risk for venous thromboembolism and those at higher risk for surgical complications. Again, tracking of outcomes is needed to measure if, when, and how much these tools improve patient safety, and when they do (or don’t) perform better than existing approaches to risk management and patient safety.
| Bonnie: It sounds like there are a lot of areas where AI is already being tested or used in clinical settings. What about tools we’re hearing about in the news, like ChatGPT? Are there ways generative AI is likely to be used in healthcare settings?
Meghan: Absolutely. One of the most interesting applications I’ve seen is automatically generating answers to patient questions submitted online, helping cut down on the time clinicians have to spend answering straightforward or simpler questions. A recent study published in JAMA Internal Medicine showed that not only can AI chatbots answer questions accurately, they can also project more empathy at times than the clinician does. We often think of empathy as a uniquely human trait, but we’ve all had days where we were so busy or overwhelmed or stressed that we didn’t show as much empathy as we could or should have. This kind of application could be a win for everyone: clinicians spend less of their valuable time answering simple questions, and patients get the information they need in a way that builds a stronger relationship with their healthcare providers.
| Bonnie: This is all very interesting. If readers want to learn more about AI, what would you suggest to them? I think this is an ever-evolving topic in terms of risk and mitigation, much like cyber liability was 10 years ago.
Meghan: Start wherever you’re comfortable and ask lots of questions. There are lots of ways to learn, whether through articles or podcasts or short online courses. I know a lot of people who have been learning by trying out ChatGPT or Bard, finding what’s helpful to them and where the boundaries are of what the tools can do - like “hallucinations,” when a tool confidently provides information that is inaccurate. Since seemingly overnight everyone is an AI “expert,” I’d encourage you to start with sources that are reputable and that you trust.
I learn a lot by asking questions when I’m researching a potential vendor or learning a new tool. Much of the time, asking a handful of simple questions gives me a good sense of whether I want to keep finding out more or whether I don’t think the tool will be useful. I often ask what data the AI tool was built on and whether that data is representative of my use case. For example, if it was built on data from a decade ago but hospital practices have changed significantly in the past few years, the tool may not work as well in today’s environment. If the dataset is small or focused on a different population, it may not work as well either. I also ask how the tool’s outputs were validated (how do they know it works), how the builders checked for bias, and how the tool is meant to be integrated into existing workflows (does it make things simpler, or does it add five additional steps to a process?).
| Bonnie: Any final takeaways you’d like to leave with our readers?
Meghan: I’d say the key takeaway is to be engaged. AI isn’t a fad or just hype - it’s being used in many aspects of our lives today and that will only continue to grow. In fact, in January 2024 the White House announced the creation of an AI Task Force at the Department of Health and Human Services, a clear sign that AI is important for healthcare. Asking questions and learning as much as you can from reputable and trusted sources will go a long way toward getting more comfortable with it and finding ways it can be helpful to you in both your personal and work lives. The deep expertise those in healthcare have developed over decades is more important than ever, and we need people to voice feedback on how to improve where and when AI is used, and to raise concerns if they see potential bias or errors in the tools. There is real opportunity to make patient outcomes better and improve the ways we work, and it will take all of us working together to make that a reality.
About the Authors

Meghan Anzelc, Ph.D. is the Chief Data & Analytics Officer at Three Arc Advisory. She has two decades of experience in data and analytics, having previously served as Global Head of Data & Analytics at Spencer Stuart. She has a decade of experience in financial services, most recently as the first Chief Analytics Officer at AXIS Capital. Dr. Anzelc's global experience in data and AI has made her uniquely qualified to shape strategy at businesses adapting to new and emerging AI capabilities ethically while managing risk appropriately. She advises boards of directors and executive teams on AI, data, and digital transformation across strategy and operations, serves as an Advisor to startups, and previously served on the board and as chair of the Nom/Gov Committee of the Chicago Literacy Alliance.

Bonnita (Bonnie) Boone is a former insurance executive and the founder of Pro Vista Risk Consulting. She has over 39 years of experience developing markets, operational processes, sales/growth strategies, and overall corporate strategy. She specializes in healthcare, including hospitals, doctors, managed care, and academic medical centers, mainly focusing on captives, reinsurance, professional liability, and other lines of coverage.


