Section 1557 Patient Care Decision Support Tools, Anti-Discrimination Compliance: 12 Things to Consider

Overview


In April 2024, the US Department of Health and Human Services (HHS) issued a final rule reinterpreting Section 1557 of the Affordable Care Act, which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability, or any combination thereof, in a health program or activity, any part of which receives federal financial assistance. This includes a general prohibition on discrimination through the use of patient care decision support tools, which becomes effective May 1, 2025 (i.e., 300 days from the rule’s effective date).

As Section 1557 covered entities (i.e., health programs or activities that receive HHS funding, health programs or activities administered by HHS, and the federal Health Insurance Marketplace) work to evaluate their patient care decision support tools in preparation for the new requirements, below we highlight 12 things covered entities should keep in mind.

In Depth


1. “Patient care decision support tool” is broadly defined. The HHS Office for Civil Rights (OCR) defines “patient care decision support tool” to mean any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities. The definition is intentionally broad and includes tools used to assess health status, recommend care, provide disease management guidance, determine eligibility, and more. The definition does not include tools that are unrelated to clinical decision-making, such as tools used for patient scheduling, supply chain management, automated medical coding, or staffing-related activities.

2. Covered entities must make “reasonable efforts” to identify tools that use input variables that measure race, color, national origin, sex, age, or disability. OCR does not describe how often entities must assess a tool for potential bias or discrimination. Rather, entities have an ongoing duty to “make reasonable efforts” to mitigate the risk of discrimination resulting from use of a tool in their health programs or activities.

3. Whether an entity engages in “reasonable efforts” will depend on the facts and circumstances. In evaluating whether a covered entity complies with the “reasonable efforts” standard, OCR may consider, among other factors:

  • The entity’s size and resources
  • Whether the entity used the tool in the manner intended by the developer and approved by regulators
  • Whether the entity customized the tool
  • Whether the developer advised the entity regarding potential for discrimination
  • Whether the entity has a process in place for evaluating the tool (e.g., governance).

OCR will consider compliance on a case-by-case basis. Some tools may warrant more frequent evaluation than others. For example, artificial intelligence (AI) models that learn and evolve after being deployed may require more frequent evaluation compared to static tools such as flowcharts.

4. When assessing tools for potential bias or discrimination, covered entities should consult reliable, publicly available resources. If a covered entity does not know whether a tool uses variables that measure race, color, national origin, sex, age, or disability, the entity should consult publicly available sources or request information from the developer. Entities may gain knowledge about a tool’s potentially discriminatory risks through media outlets, peer-reviewed studies, the tool developer, professional and hospital associations, health-insurance-related associations, subregulatory governmental guidance, and subsequent regulatory guidance and rulemaking. Entities should evaluate the tools and public resources as they would evaluate medical literature – by considering the underlying evidence and whether the resource has been peer reviewed, for example.

5. Document compliance with Section 1557. Entities should develop policies and procedures now for compliance with Section 1557. These policies and procedures should include a process for documenting the reasonable efforts planned and ultimately taken to mitigate discrimination. Options to consider include:

  • Developing checklists or forms to be completed each time a tool is assessed (a minimal sketch follows this list)
  • Implementing policies to detect discriminatory impacts
  • Establishing procedures governing how an algorithm may be used
  • Training staff on the proper use of such systems and tools.
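As one way to operationalize the checklist bullet above, the sketch below outlines a hypothetical assessment record that could be completed each time a tool is reviewed. The field names simply track the “reasonable efforts” factors discussed in item 3; they are illustrative assumptions, not a form prescribed by OCR.

```python
# Illustrative only: a hypothetical record for documenting each assessment of a
# patient care decision support tool. The fields are assumptions for
# illustration, not a form prescribed by OCR.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolAssessmentRecord:
    tool_name: str
    assessment_date: date
    uses_protected_characteristic_inputs: bool
    used_as_developer_intended: bool
    customized_by_entity: bool
    developer_guidance_on_discrimination_reviewed: bool
    mitigation_steps: list[str] = field(default_factory=list)
    reviewer: str = ""
    notes: str = ""

# Example entry a governance committee might log after reviewing one tool.
record = ToolAssessmentRecord(
    tool_name="readmission_risk_model",
    assessment_date=date(2025, 4, 15),
    uses_protected_characteristic_inputs=True,
    used_as_developer_intended=True,
    customized_by_entity=False,
    developer_guidance_on_discrimination_reviewed=True,
    mitigation_steps=["Reviewed developer guidance on known bias",
                      "Scheduled quarterly output audit"],
    reviewer="AI Governance Committee",
)
print(record.tool_name, record.assessment_date)
```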

The rule requires general Section 1557 staff training following a covered entity’s implementation of policies and procedures, and no later than May 1, 2025.

6. A covered entity’s liability under Section 1557 is independent of a developer’s liability. Covered entities’ use of patient care decision support tools must independently comply with Section 1557 irrespective of whether a tool’s developer is also subject to Section 1557. Although some stakeholders requested that covered entities share liability with algorithm creators, or that OCR impose strict liability on the manufacturers of algorithms rather than end users, HHS declined to adopt such recommendations. Instead, each covered entity must ensure that its use of a tool does not result in discrimination.

7. ASTP/ONC’s predictive decision support intervention requirements are also live. As part of the Health Data, Technology, and Interoperability (HTI-1) final rule, the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP/ONC) established a new regulatory framework for certain AI and machine learning technologies that support decision-making in healthcare, including some patient care decision support tools. Compliance requirements took effect on December 31, 2024. Section 1557 covered entities that are also certified health IT developers should review the use of AI in their Health IT Modules (as defined by ASTP/ONC) to verify that they meet the HTI-1 certification requirements. Covered entities also should evaluate whether patient care decision support tools received from health IT developers (which may not themselves be subject to Section 1557) comply with Section 1557 before deploying the tools for live use with patients.

8. Additional ASTP/ONC patient care decision support tool regulations are likely coming. In September 2024, ASTP/ONC published its 2024–2030 Federal Health IT Strategic Plan, emphasizing the key role that AI will play in future health IT regulations. The strategic plan points to the “rapid evolution” of machine-based systems that can influence decisions for a given set of human-defined objectives. As AI-based patient care decision support tools are increasingly incorporated into health IT across the industry, ASTP/ONC will likely look to build on the regulatory framework of the HTI-1 final rule, and it may do so more quickly than the inevitable revisions to the Section 1557 regulations. Section 1557 covered entities should monitor ASTP/ONC rules for potential foreshadowing of the Trump administration’s plans for patient care decision support tools under Section 1557.

9. Covered entities should continue to keep an eye on state law developments. While OCR and ASTP/ONC have begun to regulate patient care decision support tools, the US Congress has yet to act definitively. As a result, several state legislatures have enacted or proposed legislation to independently regulate developers and deployers of patient care decision support tools. While enacted and proposed legislation in states such as Illinois, Colorado, and California aims to curb algorithmic discrimination, the approaches sometimes differ significantly. In the absence of a uniform approach, covered entities should ask certain questions to inform compliance with state laws:

  • Does your tool contain safeguards against algorithmic discrimination?
  • Could the data used to train the tool lead to inadvertent algorithmic discrimination?
  • Is there a process in place to evaluate whether algorithmic discrimination is taking place (see the sketch after this list)?
  • Is there appropriate documentation in place to show efforts to evaluate each of the foregoing?
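As a concrete illustration of the evaluation question above, the sketch below compares the rate of a favorable recommendation across groups and flags large gaps, loosely following the four-fifths-style ratio comparison sometimes used in disparity screening. The records, group labels, and 0.8 threshold are assumptions for illustration only; an appropriate metric depends on the tool and the clinical context.

```python
# Illustrative only: one simple screen for group-level disparities in a tool's
# outputs, comparing the rate of a "favorable" recommendation across groups.
# The records, group labels, and 0.8 ratio threshold are assumptions.
from collections import defaultdict

def favorable_rate_by_group(records):
    """records: iterable of (group_label, favorable_outcome: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {group: favorable[group] / totals[group] for group in totals}

def flag_disparities(rates, ratio_threshold=0.8):
    """Flag groups whose favorable rate is well below the highest group's rate."""
    reference = max(rates.values())
    return [group for group, rate in rates.items()
            if reference and rate / reference < ratio_threshold]

records = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
rates = favorable_rate_by_group(records)
print(rates)                    # {'group_a': 0.667, 'group_b': 0.333} (approx.)
print(flag_disparities(rates))  # ['group_b']
```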

10. Experience implementing the regulations may influence future rulemaking. In the Section 1557 final rule, OCR requested comments on whether it should engage in additional rulemaking to expand the scope of its prohibition on algorithmic discrimination to cover other decision support tools that might not directly impact patient care and clinical decision-making but may nevertheless result in unlawful discrimination. As entities comply with the new requirements and gain experience evaluating their use of decision support tools and documenting compliance efforts, they should be mindful that comments describing those experiences may shape future regulations.

11. “Ensuring nondiscrimination in the use of AI is Good Medicine.” In a blog post accompanying a January 10, 2025, “Dear Colleague” letter explaining how covered entities can safely introduce and use AI tools in their operations, HHS OCR Director Melanie Fontes Rainer observed that “discrimination in AI-driven health care tools – whether intentional or unintentional – can lead to uneven care, diminished trust, and financial risks, such as lawsuits and regulatory interventions.” OCR’s enforcement activity has increased in recent years, with outcomes ranging from resolution agreements and compliance reviews to compensatory and punitive damages. Plaintiffs have also successfully brought private claims against Section 1557 covered entities for discriminatory actions.

12. AI oversight has been a bipartisan effort. The AI genie may be out of the bottle, but oversight efforts continue: in December 2024, the Bipartisan House Task Force on AI released a 253-page report articulating many potential harms that may arise from algorithmic decision-making, including bias, discrimination, and worker dislocation. Although the Trump administration is likely to revise the Section 1557 regulations or repeal them in their entirety, the efforts covered entities take toward meeting the May 1, 2025, patient care decision support tool compliance date will lay strong groundwork for meeting state and federal government expectations for the use of AI as part of patient care.