
CMS Sets New Guardrails for AI in Medicare Advantage

The New Regulatory Landscape: CMS Guidance on AI and Algorithms

While artificial intelligence (AI) and algorithmic tools offer significant potential to create efficiencies in healthcare administration, recent guidance from the Centers for Medicare & Medicaid Services (CMS) introduces critical compliance obligations for all Medicare Advantage (MA) organizations.

This trend has moved from the theoretical to the courtroom, with major insurers like UnitedHealthcare, Humana, and Cigna facing lawsuits over allegations that they used AI tools to wrongfully deny care to Medicare Advantage members. In response to this growing scrutiny, CMS has issued new guidance clarifying how these powerful technologies can, and, more importantly, cannot, be used in Medicare Advantage plans.

These legal challenges underscore the significant financial and reputational risks of non-compliant AI implementation, making a proactive internal policy essential.

The core of the CMS guidance presents a critical distinction: MA organizations are permitted to use AI and algorithms to support coverage decisions, but they are strictly prohibited from using these tools to supplant established coverage requirements and non-discrimination rules.

To navigate this complex environment, it is crucial to first understand the foundational definitions and principles set forth by CMS that govern all subsequent operational protocols.

CMS’s Foundational Principles: 4 Key Takeaways

A compliant AI implementation strategy must be built upon a clear understanding of CMS’s foundational principles. These principles move beyond specific rules to articulate the core philosophy of regulatory oversight, forming the bedrock of any internal governance framework. Adherence to these concepts is non-negotiable for any MA organization using algorithmic tools.

Takeaway 1: Distinguishing AI from Algorithms

To bring order to a rapidly evolving technological landscape, CMS has drawn a clear line between these often-interchanged terms. An algorithm can be a relatively simple “decisional flow chart of a series of if-then statements,” whereas AI is defined as a more complex “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.” By distinguishing deterministic algorithms from predictive AI, CMS is future-proofing its regulations: payers cannot obscure simple decision trees behind the buzzword of “AI,” and as machine learning models become more complex, they will face a higher degree of scrutiny.
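The distinction can be made concrete with a minimal sketch. Both functions below are hypothetical; the rule thresholds, feature names, and weights are illustrative stand-ins, not drawn from any real coverage policy or model.

```python
def coverage_flowchart(days_since_admission: int, prior_auth_on_file: bool) -> str:
    """An 'algorithm' in CMS's sense: a decisional flow chart of if-then rules.
    Every outcome is fully determined by explicit, inspectable conditions."""
    if not prior_auth_on_file:
        return "pend: missing prior authorization"
    if days_since_admission <= 3:
        return "approve"
    return "route to clinical review"


def predicted_length_of_stay(features: dict) -> float:
    """A stand-in for 'AI' in CMS's sense: a learned model producing a prediction.
    A trivial linear scorer plays the role of a trained model here; a real system
    would use learned weights, which is exactly why it draws more scrutiny."""
    weights = {"age": 0.05, "comorbidities": 1.2, "mobility_score": -0.4}
    return 2.0 + sum(weights[k] * features.get(k, 0.0) for k in weights)
```

The first function can be audited line by line; the second produces an opaque numeric prediction, which is why CMS treats the two categories differently.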

Takeaway 2: The Individual Member’s Case Is What Counts, Not Big Data

CMS allows payers to use algorithms to support decision-making but places full responsibility on insurers to vet and validate each tool before it affects member care. The guidance reinforces a key rule: every Medicare Advantage coverage decision must rely on the individual member’s circumstances: their medical history, physician recommendations, and clinical notes. This requirement serves as a firewall that protects each patient’s health journey from being overshadowed by population-level data or generalized models.

Takeaway 3: AI Can Inform, But It Can’t Terminate Care on Its Own

CMS has placed firm restrictions on how automated systems can be used at critical junctures of care, particularly for post-acute services and inpatient admissions.

  • Post-Acute Services: Algorithms may only be used to help predict a member’s potential length of stay. They cannot be the basis for terminating coverage for these services. Before a termination notice can be issued, the member must be re-examined by a clinical professional.
  • Inpatient Admissions: Algorithms and AI tools cannot be the sole reason to deny an inpatient admission or to downgrade a patient’s status to an observation stay. These decisions require clinical judgment based on the individual’s condition.

Takeaway 4: No Secret Rules or Shifting Standards Allowed

To enforce transparency and fairness, insurers must ensure their automated tools are used in combination with coverage criteria that have been posted and made publicly available. Denials of basic benefits are permissible only for clearly defined administrative reasons, such as network limitations or noncompliance with prior authorization rules, not on the basis of proprietary or non-public criteria generated by an algorithm’s predictive analysis.

According to CMS, algorithms should only be used to ensure compliance with a plan’s existing internal coverage criteria. The agency established two clear prohibitions:

1. Predictive algorithms cannot apply internal coverage criteria that are not made public.

2. AI cannot be used to shift or alter a plan’s coverage criteria over time.
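Both prohibitions lend themselves to an automated conformance check: compare every criterion a tool applies against a registry of publicly posted criteria and their published versions. This is a minimal sketch under assumed identifiers; the criterion IDs and version labels are hypothetical.

```python
# Hypothetical registry of publicly posted coverage criteria -> posted version.
PUBLIC_CRITERIA = {"SNF-001": "v3", "IRF-104": "v1"}


def validate_criteria_use(applied: dict) -> list:
    """Flag violations of either CMS prohibition: (1) applying criteria that
    are not publicly posted, or (2) applying a version that has drifted from
    the posted one, i.e., criteria shifting over time."""
    issues = []
    for criterion_id, version in applied.items():
        if criterion_id not in PUBLIC_CRITERIA:
            issues.append(f"{criterion_id}: criterion not publicly posted")
        elif version != PUBLIC_CRITERIA[criterion_id]:
            issues.append(
                f"{criterion_id}: applied version {version} differs "
                f"from posted {PUBLIC_CRITERIA[criterion_id]}"
            )
    return issues
```

Running a check like this on every automated determination produces the documentation trail regulators expect when they ask how a tool's criteria were kept public and stable.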

In the Frequently Asked Questions (FAQ) memo issued in February 2024, CMS, acknowledging that algorithms can codify and scale human biases at an unprecedented rate, issued a direct warning about the technology’s potential to exacerbate health inequities: “We are concerned that algorithms and many new artificial intelligence technologies can exacerbate discrimination and bias… MA organizations should, prior to implementing an algorithm or software tool, ensure that the tool is not perpetuating or exacerbating existing bias, or introducing new biases.”

Strategic Imperatives for Leadership

The CMS guidance should be viewed not as a mere compliance burden, but as a set of strategic imperatives for healthcare administrators, compliance officers, and IT leaders. A proactive, transparent, and ethical approach to AI is both a risk-mitigation necessity and a competitive differentiator in a market demanding greater accountability.

Key Action Items for MA Leadership

  1. Establish a Cross-Functional AI Governance Committee: Create a dedicated oversight group that includes compliance, clinical, legal, and IT leaders. Give this committee the authority to review and approve every algorithmic tool before deployment. Its top priority is to meet CMS’s mandate by proactively identifying and mitigating potential bias. The committee should also define and enforce a formal validation process for each tool, maintain a complete documentation trail for audits, and confirm that bias mitigation is built in before any tool goes live.
  2. Audit All Existing Algorithmic Tools: Conduct an immediate and thorough audit of all current AI and algorithmic tools against the specific operational guardrails outlined in Question 2 of the February 2024 Frequently Asked Questions (FAQ) memo: “Do the new rules on clinical coverage criteria for basic Medicare benefits mean that MA organizations cannot use algorithms or artificial intelligence to make coverage decisions?” This audit must pay special attention to tools used in post-acute care and inpatient admission processes to ensure they are not the sole basis for adverse coverage determinations.
  3. Enhance Transparency Protocols: Review and strengthen internal transparency procedures. Make all coverage criteria used by algorithms publicly accessible, as CMS requires. Clear disclosure eliminates the risks of “black box” decision-making and demonstrates compliance with CMS’s expectations for fairness and accountability.
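An inventory-driven audit of the kind described in item 2 can be sketched as a simple conformance pass over a tool registry. Everything here is illustrative: the tool names, attribute fields, and findings wording are assumptions, not a CMS-mandated schema.

```python
# Hypothetical inventory of deployed algorithmic tools and their audit attributes.
tools = [
    {"name": "los_predictor", "area": "post_acute",
     "sole_basis_for_denial": False, "bias_review_done": True, "criteria_public": True},
    {"name": "admit_screener", "area": "inpatient",
     "sole_basis_for_denial": True, "bias_review_done": False, "criteria_public": True},
]


def audit_findings(inventory: list) -> list:
    """Check each tool against the guardrails discussed above: no tool may be
    the sole basis for an adverse determination, each needs a pre-deployment
    bias review, and each must rely on publicly posted criteria."""
    findings = []
    for t in inventory:
        if t["sole_basis_for_denial"]:
            findings.append(f'{t["name"]}: cannot be sole basis for an adverse determination')
        if not t["bias_review_done"]:
            findings.append(f'{t["name"]}: bias review missing (pre-deployment requirement)')
        if not t["criteria_public"]:
            findings.append(f'{t["name"]}: relies on non-public coverage criteria')
    return findings
```

A governance committee can run a pass like this on a schedule, with each finding feeding the documentation trail it maintains for audits.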

Embedding these principles into the governance model goes beyond regulatory compliance. It builds member trust, strengthens market leadership, and protects your organization’s long-term viability in an AI-driven healthcare landscape.

The new CMS guidance on AI sends a clear signal: innovation must serve member interests, not compromise them. By defining terms, demanding transparency, and prioritizing each member’s clinical needs over algorithmic predictions, CMS ensures that human judgment continues to steer digital decisions.

As CMS expands oversight of algorithmic decision-making, payers must stop treating AI governance as an afterthought. The real opportunity lies in building compliance, transparency, and ethics directly into every layer of AI operations. Modern, healthcare-native platforms make this possible by giving payers AI governance frameworks, audit-ready workflows, and traceable data models that keep compliance and care aligned.

If you’re exploring how to bring responsible AI into your Medicare Advantage operations, now is the time to start. Talk to our team to learn how Inovaare helps health plans balance innovation with integrity.
