HHS Asks the Public: How Can Federal Action Help Accelerate AI Use in Clinical Care?

March 6, 2026
Estimated Read Time: 4 mins

In January 2026, the U.S. Department of Health & Human Services (“HHS”) and the Assistant Secretary for Technology Policy / Office of the National Coordinator for Health Information Technology (“ASTP/ONC”) published a Request for Information (“RFI”) aimed at accelerating the adoption of artificial intelligence (“AI”) in clinical settings. Consistent with the priorities outlined in the White House’s AI Action Plan and recent Executive Orders on AI, the RFI seeks public feedback on “the actions [HHS] can take to establish a forward-leaning, industry-supportive, and secure approach to accelerate the adoption and use of AI as part of clinical care.” The deadline for public comments was February 23, 2026, and nearly 500 submissions were received by the close of the comment period.

Areas of Focus

HHS and ASTP/ONC are explicitly looking for “concrete, experience-based feedback” from a range of stakeholders — particularly developers of AI tools, providers and businesses currently leveraging AI in clinical settings, and entities facing barriers to adoption. As stated in the RFI, public input will inform an HHS-wide policy strategy built around three core approaches: regulation, reimbursement, and research & development.

  • Regulation. The RFI seeks comments on how current HHS regulations affect AI adoption and use in clinical care, and what regulatory adjustments could better support appropriate deployment.
  • Reimbursement. The RFI also seeks views on payment policy changes that would give payers both the incentive and ability to promote access to high-value AI clinical interventions, foster competition among AI tool developers, and accelerate access to — and affordability of — AI tools used in clinical care.
  • Research & Development. Finally, the RFI solicits ideas on how HHS could invest in R&D — including through public-private partnerships and cooperative research and development agreements — to integrate AI into care delivery and create new, long-term market opportunities, with the overarching goal of improving health and wellbeing.

Additional Questions

Beyond these three general inquiries, the RFI probes the limits of private-sector AI innovation and real-world use in clinical care, and asks what the federal government could change or support to enable effective adoption. Stakeholders are invited to comment on: barriers to adoption; regulatory, payment, and program changes HHS should prioritize; legal and implementation issues (including liability, privacy, and security); methods for evaluating AI before and after deployment (including metrics, robustness, and workflow integration); and any support mechanisms (such as grants, contracts, cooperative agreements, or prize competitions) or private-sector processes (such as certification, accreditation, or testing) that would be most impactful.

According to HHS and ASTP/ONC, the goal is to identify how these approaches can be applied to support rapid adoption of AI and interoperability in clinical care, while also fostering public trust and improving health outcomes for patients and communities.

Stakeholders Weigh In

Commenters include professional associations, accrediting organizations, health systems, policy-focused organizations, and AI developers responding from applied clinical experience. Across the submissions, commenters repeatedly describe adoption barriers rooted in data and interoperability limitations as well as organizational capacity constraints. Several emphasize that effective clinical AI depends on data that can be exchanged and meaningfully compared across settings. Others stress the need for representative data and attention to context mismatch — to avoid bias and misinterpretation when tools are deployed outside the environment in which they were developed.

On policy priorities for HHS, commenters commonly point to aligning regulation, reimbursement, and R&D with the realities of deployment. Multiple submissions argue that a key obstacle is regulatory uncertainty for clinical AI that falls outside traditional medical-device pathways, which can slow or block deployment due to unclear expectations around accountability, liability, and post-deployment monitoring. Several recommend clearer federal guidance on validation, verification, and ongoing monitoring. On the payment side, many commenters describe misaligned incentives, noting that AI’s value is often realized through efficiency, prevention, burden reduction, and care coordination — benefits that may not be adequately captured under existing payment models.

Finally, several submissions urge HHS to support an evaluation and trust infrastructure (including standardized reporting through the use of “model cards,” shared benchmarking and evaluation resources, and accreditation guidance) that addresses patient-facing issues such as transparency about AI use and concerns related to privacy and security.

Takeaways

In the RFI, HHS and ASTP/ONC frame clinical AI adoption as a policy and implementation challenge as much as a technical one, and the public comments largely reinforce that view. Healthcare providers should be aware that HHS action is likely to focus on making deployment easier and more scalable — specifically, by clarifying expectations for validation and ongoing monitoring, improving data access and interoperability, and better aligning payment policies with AI’s demonstrated value in efficiency, prevention, and care coordination. If executed effectively, these actions could help providers adopt AI with greater confidence while improving patient outcomes and clinician experience.

Tags: Department of Health and Human Services

Disclaimer: This alert is provided for informational purposes only, does not constitute legal advice, and is not intended to form an attorney-client relationship. Please contact your Sheppard attorney for additional information.