On April 16, 2026, Sheppard’s Healthy AI team, led by Chicago partner Carolyn Metnick, hosted its inaugural Healthy AI Forum, bringing together healthcare leaders and industry stakeholders to explore current regulatory, governance, and operational considerations surrounding healthcare AI. In-house legal and insurance leaders joined Sheppard healthcare attorneys Carolyn Metnick, Katie O’Neill, Esperance Becton, and Christina Nguyen for in-depth panel discussions and breakout sessions on emerging issues facing healthcare organizations. The panels and breakout sessions created a focused opportunity to share best practices for leveraging AI to drive innovation in research and clinical operations while safeguarding patient trust, safety, quality, and privacy.
Panelists and attendees engaged in thoughtful discussions about how healthcare organizations can responsibly evaluate, govern, and deploy new and evolving AI technologies. Conversations addressed a wide range of issues, including governance frameworks, transparency and patient education, strategic planning, vendor negotiations, and legislative advocacy.
A central theme throughout the forum was that successful AI adoption in healthcare depends not only on technology but also on fundamentally human considerations, including:
- Transparency with stakeholders – clinicians, families, leadership, patients, and payors;
- Communication, training and education, and trust-building with clinicians and patients;
- Ethical considerations related to governance, legislation, and patient protections; and
- Strategic, value-driven planning to support clinical, operational, and long-term organizational goals in line with legislative priorities.
Panel 1: AI Governance and Regulatory Readiness
Carolyn Metnick, Sheppard Healthcare Partner | Nancy Paridy, Shirley Ryan AbilityLab, President and Chief Administrative Officer | Carl Bergetz, Rush University System for Health, Chief Legal Officer
The first panel explored how healthcare organizations are developing AI governance frameworks in the absence of comprehensive federal AI legislation. As AI technologies rapidly evolve and state-level regulation continues to emerge, healthcare organizations are increasingly relying on internal governance structures and cross-functional collaboration to manage risk and guide responsible adoption.
Panelists discussed how legal departments and executive leadership teams are working to stay informed about evolving legal and technological developments in order to support enterprise risk management, clinical risk management, and insurance strategy. They also stressed that AI governance cannot be siloed within IT or compliance functions alone; effective governance requires collaboration across legal, compliance, clinical, operational, and executive leadership teams. The panel also emphasized the importance of physician involvement in governance discussions, particularly when AI tools directly impact patient care, clinical decision-making, quality initiatives, or medical records management.
A key operational challenge identified by the panelists was the speed at which AI technologies are entering healthcare environments. Business teams and clinicians are exploring new AI tools faster than many organizations can formally evaluate them, creating pressure on legal and compliance teams to remain engaged without becoming barriers to innovation.
Panelists also highlighted the growing challenge of “shadow AI” – the use of unapproved AI tools outside formal governance pathways, such as clinicians using public platforms on personal devices for clinical support. Because IT teams cannot fully monitor or prevent all use, organizations must rely on governance, education, and clear usage expectations to protect patient data and trust.
Patient trust and transparency also emerged as major areas of focus. Public skepticism surrounding AI, combined with ongoing concerns about healthcare cybersecurity incidents and large-scale data breaches, has increased pressure on healthcare organizations to protect patient information and communicate clearly about how AI may affect patient care. Panelists noted that organizations should think beyond basic notice and consent requirements and instead consider how to meaningfully educate patients about the role AI plays in healthcare delivery.
The discussion also highlighted the limitations of existing healthcare privacy laws in addressing modern AI technologies. Current privacy frameworks, including HIPAA, were not designed to account for the ways AI systems ingest, process, and learn from data. As a result, organizations are often operating in areas of legal uncertainty, making strong internal governance, ongoing risk assessment, workforce education, and thoughtful patient engagement and consent practices essential.
Finally, panelists discussed the growing importance of legislative engagement. As states continue developing healthcare AI legislation, healthcare organizations may benefit from proactively engaging with lawmakers and regulators to help ensure future regulatory frameworks reflect operational realities within healthcare systems.
Panel 2: Best Practices for AI Vendor Due Diligence and Contracting
Katie O’Neill, Sheppard Healthcare Attorney | Christopher Carlson, Aledade, Inc., Associate General Counsel & Privacy Officer | Lauren Edelman Willens, Henry Ford Health, Senior Counsel | Lydia Andrasz, Endeavor Health, System Assistant Vice President and Associate General Counsel
The second panel focused on how healthcare organizations are approaching AI vendor diligence and contracting in an increasingly complex and rapidly evolving marketplace.
Panelists emphasized that AI vendor relationships now require legal teams to move beyond traditional contract review and engage in broader, cross-functional risk assessment alongside business and operational stakeholders. In evaluating new AI tools, legal teams are increasingly helping assess long-term privacy, cybersecurity, compliance, and operational risks, including pressure-testing vendors with limited track records (such as early-stage or pilot solutions). Legal teams also evaluate proposed solutions within the organization’s broader AI governance framework while ensuring timely, multidisciplinary engagement across relevant stakeholders.
The discussion underscored the importance of tailoring diligence to the specific risk profile of each AI solution. Key considerations include intended use, clinical versus non-clinical application, data sensitivity and scope, and alignment with enterprise AI governance policies.
Panelists highlighted practical diligence strategies, including evaluating data handling practices (including deidentification methods), vendor qualifications and certifications, insurance coverage, and overall operational and financial stability. Given the rapid influx of AI vendors into the healthcare market, they also emphasized the need to anticipate longer-term risks such as vendor solvency and data disposition if a relationship ends.
From a contracting perspective, panelists stressed the importance of maintaining standardized protections where possible – particularly for protected health information – through tools such as AI-specific security addenda, business associate agreements, and clearly defined provisions addressing data use, retention, security, and ownership.
Importantly, panelists noted that AI vendor oversight should not end once a contract is signed. Organizations should regularly reassess AI vendor relationships throughout the lifecycle of the engagement to evaluate evolving risks, changes in data use, scope creep, and compliance with governance expectations.
Panel 3: Strategic AI Risk Management
Esperance Becton, Sheppard Healthcare Associate | Kelly Greening, Ann & Robert H. Lurie Children’s Hospital, Vice President and Deputy General Counsel | Lindsay Combs, Marsh, Senior Vice President
The third panel explored how healthcare organizations can operationalize AI governance within broader enterprise risk management and strategic planning efforts. Panelists emphasized that AI governance should align with existing legal, compliance, and risk-management structures while still allowing organizations to develop targeted oversight mechanisms for higher-risk AI use cases.
The discussion emphasized that effective AI governance requires cross-functional accountability among legal, compliance, IT, operational, and clinical leadership teams, particularly as organizations determine how to escalate and oversee higher-risk AI initiatives. Panelists also noted that governance structures must remain flexible as organizations continue evaluating whether existing oversight frameworks are sufficient for rapidly evolving AI technologies.
Panelists described a tiered, risk-based governance approach that included several key components:
- Standardized intake processes to assess the nature and level of risk associated with proposed AI tools;
- Cross-functional review by representatives from legal, compliance, risk management, IT, and clinical leadership;
- Three-tier approval pathways in which higher-risk technologies require escalation to executive leadership or governing boards; and
- Ongoing reporting and oversight to leadership and compliance committees.
A key theme throughout the discussion was that legal and compliance functions should enable responsible innovation rather than function exclusively as gatekeepers. As panelists noted, a reflexive “no” to AI adoption can ultimately impede organizational progress rather than mitigate risk.
Consistent with this theme, panelists emphasized the importance of balancing innovation with oversight. Organizations that build practical, collaborative governance processes are better positioned to support safe AI adoption while managing operational, clinical, and regulatory risk.
The panel also addressed operational challenges associated with AI adoption, including the growing use of “shadow AI” tools outside formal governance channels. Panelists highlighted the need for workforce education, clear guidance on approved tools, and practical safeguards to reduce risks related to privacy, security, and data leakage from unauthorized generative AI use.
From an insurance perspective, panelists noted that while brokers and carriers have not yet materially changed underwriting questions related to AI, this is expected to evolve. Increasingly, insurers are focusing on governance maturity, cybersecurity safeguards, documentation practices, and enterprise oversight of AI-enabled technologies. As underwriting standards develop, organizations with mature governance frameworks may be better positioned in renewal and coverage discussions.
Panelists also observed that forward-looking organizations are already proactively documenting and sharing governance structures with insurers as part of renewal processes – not because it is required, but because demonstrating governance maturity is becoming a key marker of risk readiness.
Finally, panelists discussed the growing likelihood that certain AI-enabled tools will become embedded in clinical workflows and, over time, influence evolving standards of care. As AI adoption expands, healthcare organizations will need to continue addressing clinical risk, reliance on AI-generated outputs, and the need for appropriate oversight and validation frameworks to support responsible use.
Looking Ahead
As AI continues reshaping healthcare delivery, research, and operations, the forum underscored a clear takeaway: organizations that invest now in strong governance structures, rigorous vendor diligence, cross-functional collaboration, and proactive risk management will be best positioned to responsibly harness AI while protecting patient trust, safety, and privacy.
The discussions also highlighted the importance of education and transparency across all levels of the organization – from leadership and clinicians to patients and operational teams. At the same time, as states continue advancing AI-related legislation, healthcare organizations have an opportunity to engage with policymakers and help shape emerging regulatory frameworks.
Responsible AI adoption in healthcare is still in its early stages. Sheppard’s Healthy AI team looks forward to continuing these conversations with the healthcare leaders, legal professionals, and operational stakeholders driving this evolution.
Save the date of November 12, 2026, for the next Sheppard Healthy AI Forum, which will take place in Washington, D.C. and feature another group of outstanding healthcare AI thought leaders.