According to a 2025 survey by the National Association of Insurance Commissioners, 84% of health insurers reported using AI and machine learning in some capacity. As AI increasingly influences coverage decisions, care navigation, and clinical decision support, payers must demonstrate responsible AI governance through auditable controls, monitoring, and accountability.
Early healthcare AI guidance focused largely on ethical principles: fairness, transparency, and bias mitigation. While important, these principles described intent rather than operational responsibility. Health plans could endorse responsible AI governance without materially changing how technologies were evaluated, deployed, or monitored.
That is changing as organizations such as The Joint Commission (TJC) and the Coalition for Health AI (CHAI) are translating high-level responsible AI guidelines into practical expectations around validation, oversight, and auditability. When AI influences member care decisions, systems must demonstrate measurable performance, defined escalation pathways, and ongoing monitoring.
CHAI and TJC approach the same problem from different angles, arriving at similar conclusions. CHAI emphasizes human-in-the-loop clinical oversight, reinforcing that high-risk healthcare AI should support, not replace, clinical judgment. TJC focuses on accountability, safety, and quality management when technology influences patient care. Together, they signal that medical AI must operate within established clinical governance rather than under consumer technology oversight.
To address uncertainty about responsibility, the two organizations issued joint guidance outlining operational controls. Organizations are expected to formalize internal oversight through defined governance processes, to set clear protections for patient information and secure use of data, and to treat medical AI performance as a continuously managed quality function.
The implication is clear: medical AI is increasingly evaluated like other clinical infrastructure, with defined expectations for oversight, risk management, and continuous performance evaluation.
For health plans, these industry guidelines outline the core capabilities required to safely deploy medical AI for member populations. First, care decisions should remain clinically governed: AI-influenced recommendations need human-in-the-loop oversight and defined escalation pathways to a physician.
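The routing logic behind such an escalation gate can be quite small. The sketch below is a hypothetical Python illustration, not a pattern from the cited guidance; the thresholds, flag names, and Recommendation fields are all invented for the example.

```python
from dataclasses import dataclass

# Illustrative values only; real thresholds and flag lists would come from
# clinical governance policy, not engineering defaults.
CONFIDENCE_FLOOR = 0.85
HIGH_RISK_FLAGS = {"chest_pain", "suicidal_ideation", "anaphylaxis"}

@dataclass
class Recommendation:
    member_id: str
    summary: str
    confidence: float
    risk_flags: set[str]

def route(rec: Recommendation) -> str:
    """Return 'clinician_review' unless the recommendation clears every gate."""
    if rec.risk_flags & HIGH_RISK_FLAGS:
        return "clinician_review"      # high-acuity topics always escalate
    if rec.confidence < CONFIDENCE_FLOOR:
        return "clinician_review"      # low model confidence escalates
    return "deliver_with_audit_log"    # delivered, but still logged for review

# Example: a low-confidence recommendation is held for physician sign-off.
rec = Recommendation("m-001", "adjust statin dose", 0.62, set())
assert route(rec) == "clinician_review"
```

The design point is that escalation is the default failure mode: a recommendation must affirmatively clear every gate before it reaches a member.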
Plans should also assess whether recommendations can be understood and trusted. This requires transparency into how a recommendation was produced and the ability to audit it after the fact.
Finally, safety must be managed throughout the lifecycle. Performance monitoring, clinician feedback loops, and outcome tracking are required after deployment to identify drift, bias, or emerging failure modes over time. As these expectations become baseline practice, payers will need to prioritize medical AI solutions that are designed around responsible AI governance and can demonstrate safety and compliance.
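To make "identify drift over time" concrete, here is a minimal monitoring sketch in Python. It uses the population stability index (PSI), a common drift statistic; the 0.2 alert threshold and the synthetic score distributions are illustrative assumptions, not values from the TJC or CHAI guidance.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent window.
    A common rule of thumb treats PSI > 0.2 as meaningful drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.clip(np.histogram(expected, bins=edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, bins=edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.6, 0.10, 5000)   # scores observed at validation time
this_week = rng.normal(0.5, 0.15, 5000)  # shifted production scores
psi = population_stability_index(baseline, this_week)
if psi > 0.2:
    print(f"PSI={psi:.2f}: flag for governance review")  # triggers the feedback loop
```

A check like this, run on a schedule against production scores, is one simple way the continuous quality function described above becomes an operational control rather than a policy statement.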
While some organizations may assume that risk controls can be introduced after a tool goes live, current guidance suggests otherwise. TJC and CHAI emphasize that protections must be built into the architecture and workflows from the outset. Payers must evaluate whether a solution includes foundational guardrails such as defined escalation pathways, continuous performance monitoring, and auditable decision logs.
When these guardrails function as core infrastructure, AI systems become easier to audit. This level of transparency supports regulatory review and enables plans to trust solutions that directly provide member care.
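What "easier to audit" can look like in practice: the sketch below is a hypothetical design, with invented field names such as model_version and clinician_reviewer, in which each AI-influenced decision is stored with enough provenance that a reviewer can later reconstruct what the system saw, what it produced, and who signed off.

```python
import json, hashlib, datetime

def audit_record(model_version: str, inputs: dict, output: str, reviewer: str | None) -> dict:
    """Build an append-only audit entry for one AI-influenced decision."""
    payload = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "clinician_reviewer": reviewer,   # None means no human reviewed it
    }
    payload["checksum"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()                         # tamper-evidence for later review
    return payload

entry = audit_record("triage-v3.2", {"symptom": "rash"}, "refer to dermatology", "dr_lee")
print(json.dumps(entry, indent=2))
```

When records like this are written as a matter of course, regulatory review becomes a query over existing data rather than a forensic reconstruction.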
Counsel combines medical AI with physician oversight to safely expand access at scale while preserving clinical accountability. To meet the highest safety standards for enterprise partners, our platform leverages an agentic AI framework. Each agent can be individually evaluated, constrained, or updated. Additionally, each member interaction is monitored by independent safety agents to ensure every exchange meets clinical, legal, and payer-level standards for safety and compliance.
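The general pattern is sketched below purely for illustration; this is not Counsel's actual implementation, and the checker functions and their rules are invented. A primary agent's draft response is released only after independent checker agents each approve it, and any failure escalates the exchange to a clinician.

```python
# Illustrative pattern: independent safety agents gate a primary agent's output.

def clinical_safety_check(reply: str) -> bool:
    # placeholder rule; a real checker would be its own evaluated component
    return "dosage" not in reply.lower() or "confirm with your physician" in reply.lower()

def compliance_check(reply: str) -> bool:
    return "guarantee" not in reply.lower()   # e.g., no outcome guarantees

SAFETY_AGENTS = [clinical_safety_check, compliance_check]

def release(draft_reply: str) -> str:
    failures = [check.__name__ for check in SAFETY_AGENTS if not check(draft_reply)]
    if failures:
        # blocked exchanges go to a human instead of reaching the member
        return f"ESCALATED to clinician (failed: {', '.join(failures)})"
    return draft_reply

print(release("We guarantee this dosage will work."))   # fails both checks
print(release("Please confirm with your physician."))   # passes and is released
```

Because each checker is a separate, individually testable component, it can be evaluated, constrained, or updated on its own, which is what makes an agentic architecture of this kind auditable.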
By partnering with Counsel, payers are able to advance their innovation strategies with a solution that meets the evolving compliance and governance standards emphasized by CHAI and TJC.
Request a demo of Counsel to see how AI-enabled primary care can be integrated into your existing platform via a white-labeled deployment, securely expanding access for your member population.
National Association of Insurance Commissioners. NAIC Survey Reveals Majority of Health Insurers Embrace AI. https://content.naic.org/article/naic-survey-reveals-majority-health-insurers-embrace-ai
Coalition for Health AI. Responsible AI Guide. https://assets.ctfassets.net/7s4afyr9pmov/6e7PrdrsNTQ5FjZ4uyRjTW/c4070131c523d4e1db26105aa51f087d/CHAI_Responsible-AI-Guide.pdf
The Joint Commission. The Responsible Use of AI in Healthcare (RUAIH). https://digitalassets.jointcommission.org/api/public/content/dcfcf4f1a0cc45cdb526b3cb034c68c2
The Counsel Health editorial team is a multidisciplinary group of writers and editors dedicated to delivering clinically grounded, evidence-based health information. Their work is informed by real-world care delivery and guided by physician expertise, ensuring content is accurate, accessible, and trustworthy. By translating complex medical topics into clear, practical guidance, the team helps readers understand their health, explore care options, and make informed decisions in a rapidly evolving healthcare landscape.

Javier Monterrosa is a healthcare marketing leader who has spent his career driving growth across AI, metabolic health, interoperability, and EHR companies. He holds a Master’s in Analytics and has co-authored published research examining how strategic decisions shape business growth. Having grown up in Latin America, he is driven to partner with mission-driven teams committed to improving healthcare access and outcomes through responsible technology.
Our content is created for informational purposes and should not replace professional medical care. For personalized guidance, talk to a licensed physician. Learn more about our editorial standards and review process.