Operational Excellence

AI governance in hospitals. The control model for 2026

Published on
February 16, 2026

AI is no longer a side project.

It is now inside clinical notes, triage flows, authorizations, and patient messages.

When AI shapes care or operations, it becomes auditable.

Leaders must be able to explain what happened, who owned it, and what controls were in place.

This post gives hospital CEOs, COOs, CIOs, and CTOs a clear control model for 2026, across Switzerland and the EU.

It is built for real hospital life, not for a slide deck.

AI is already in your workflows. Most hospitals just don’t see the full map yet

A lot of hospitals think they are “piloting AI.”

But AI is already there.

It is often embedded in tools you already pay for. It appears as “smart” features, auto-summaries, decision support prompts, and automated next steps.

That creates a gap.

Teams use AI every day.

Leaders cannot always say where it runs, what data it uses, or what happens when it is wrong. That gap is where risk grows.

 

What “AI” means in hospital operations today

When this post says “AI,” it includes three buckets many hospitals already run: generative AI (drafting notes, summaries, and patient messages), machine learning (risk scores, prediction models, routing), and embedded vendor AI (“smart” features inside tools you already pay for).

If your team did not “buy an AI system,” but your vendor rolled out AI features, you still need governance.

 

From tool to system. Why governance must follow

BHM Healthcare Solutions is a US healthcare consulting firm that supports provider and payer organizations on strategy, operations, and performance improvement.

Their “2025 in Review” series shares what they are seeing across real-world AI deployments in healthcare.

BHM’s warning is simple: Leaders approved AI as a narrow productivity helper.

In practice, it became part of the system. It moved into documentation, utilization analysis, coverage logic, and admin workflows.

That creates new dependencies and new failure modes.

BHM points to patterns leaders are now seeing:

  • Operational dependency rises: If the tool fails, the workflow slows.
  • Ownership gets fuzzy: People use it, but no one “owns” it.
  • Rollouts beat oversight: Validation and escalation lag go-live.
  • Model drift becomes real risk: Quiet performance drops can go unnoticed.
  • Executive exposure grows: Outputs can be reviewed by auditors, clinicians, and patients.

Leadership test for 2026: stop measuring “how much AI you deployed.”

Start measuring “how clearly you can explain and prove control.”

 

EU pressure points. What changes in practice

EU AI Act. Oversight becomes a duty

The EU AI Act lays down rules for AI systems, including obligations for certain higher-risk uses.

It includes requirements tied to governance, documentation, and human oversight.

You do not need to be a lawyer to get the point.

If an AI system can affect health, safety, rights, or access, you need to show control. You need clear roles, documented limits, monitoring, and the ability to investigate.

EHDS. Your AI program becomes a data program

The European Commission states the EHDS Regulation entered into force on 26 March 2025, starting a transition phase toward staged application.

EHDS matters because AI scale depends on data.

If you cannot show where data came from, what permissions apply, and how it moves across systems, you will struggle to scale AI safely. You will also struggle to defend it when questioned.

Swiss pressure points. What applies now

Federal Council direction. Align, then adapt

On 12 February 2025, the Swiss Federal Council said Switzerland intends to ratify the Council of Europe AI Convention and adapt Swiss law as needed. It also stated work will continue on AI regulation in specific sectors, including healthcare.

Switzerland may not launch one single “Swiss AI Act” right away.


Still, the direction is clear: international alignment, plus sector-led rules.

Swiss data protection. Privacy rules apply now

Switzerland’s revised data protection framework has applied since 1 September 2023.

That affects real deployments today:

  • patient-facing messaging
  • documentation automation
  • analytics on patient data
  • any AI tool that processes personal data

Leader translation: if AI touches patient data, you need purpose limits, access controls, and privacy-by-design. You also need a transparency stance you can defend.

Medical device line. Some AI is regulated tech

The Swiss Federal Office of Public Health notes Switzerland revised medical device rules in line with EU MDR and IVDR, raising safety and quality expectations.

In plain terms: if an AI tool functions as medical device software in your setting, treat it like regulated tech.

That means risk management, validation, change control, and documentation you can show under review.

DigiSanté: The runway for safe scale

DigiSanté aims to introduce standards, specs, and infrastructure to support seamless data exchange.

It also points toward a secure Swiss health data space and responsible secondary use.

 

Cross-border reality. Why EU standards still matter for Swiss providers

Even outside the EU, EU rules can still matter in practice when you:

  • deploy AI systems placed on the EU market by vendors you use
  • deliver cross-border services to EU patients or partner entities
  • take part in EU research networks or EU-based reuse setups
  • operate affiliates, clinics, or contracts inside EU member states

Also, the Swiss Federal Council signals an intent to align internationally. That increases the chance that “EU-grade” governance becomes the default expectation in partnerships and procurement.

Many Swiss providers adopt EU-style governance because vendors, partners, and patient flows pull them there.

 

Four questions your board will ask. Can you answer them?

Boards do not want buzzwords.


They want proof of control. Here are the questions that matter.

1) Where is AI influencing care, access, or patient-facing content? Is any of it higher-risk?

The EU AI Act includes obligations for AI uses that fall into higher-risk categories.

In healthcare, that can include tools that influence triage, decisions, or patient-facing outputs.

What the board expects: a clear list of these use cases, named owners, and the controls in place.

2) Can we prove our data trail is clean, legal, and traceable?

EHDS entered into force on 26 March 2025 and moves through a transition phase toward staged application.

The practical point is immediate: interoperability, permissions, and traceable provenance are needed for safe scale and for defensible use.

What the board expects: data sources mapped per use case, clear permissions, and traceable provenance.

3) Are we privacy-safe in Switzerland, today?

Swiss data protection has applied since 1 September 2023.

That shapes what you can do right now with patient data and AI-enabled workflows.

What the board expects: purpose limits, access controls, and a transparency stance that can be explained simply.

4) Are we treating regulated AI like a “feature”?

If an AI tool functions as medical device software in your setting, it needs stronger controls.

Validation, risk management, and change control are central.

What the board expects: clear classification thinking, and proof that higher assurance applies where needed.

 

The only risk model you need to start

You do not need a complex framework to begin.

Start with four tiers and use them to decide what to govern first. Tier 1 is patient-facing and decision-influencing; Tier 2 is workflow-directing or financially material; Tiers 3 and 4 cover lower-risk uses.

Rule of thumb: Tier 1 and Tier 2 must have named owners, written rules of use, monitoring, and an evidence pack ready.

 

The safety failures that show up first

Plan for common failures, not rare edge cases.

These are the ones leaders should expect early:

  • wrong discharge instructions or wrong follow-up steps
  • medication advice errors (dose, interactions, contraindications)
  • biased documentation (AI smooths narratives and drops uncertainty)
  • automation complacency (staff trust confident outputs too quickly)
  • silent drift (accuracy drops after updates or population shifts) [1]

If AI touches patient comms, documentation, or triage, these failures become clinical and reputational risk, not just “tech bugs.”

 

The 90-day plan to get control fast

This plan is built to fit hospital reality. It focuses on Tier 1 and Tier 2 first.

1) Map every AI touchpoint, including hidden features

  • Create an AI register with every tool and embedded feature
  • Include EHR add-ons, dictation, portals, call centers, revenue cycle
  • Tag each item by tier and by domain (clinical, ops, finance, patient-facing)
  • Flag anything that touches triage, notes, discharge text, coding, authorizations, patient communication

Output you want by day 30: one list that is complete enough to defend.
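As a minimal sketch of what that register could look like in code (the entry fields, vendor names, and the `needs_flag` helper are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    name: str                  # tool or embedded vendor feature
    vendor: str
    tier: int                  # 1 = highest risk, 4 = lowest
    domain: str                # "clinical", "ops", "finance", "patient-facing"
    touchpoints: list[str] = field(default_factory=list)

# Sensitive workflows named in the step above.
SENSITIVE = {"triage", "notes", "discharge text", "coding",
             "authorizations", "patient communication"}

def needs_flag(entry: RegisterEntry) -> bool:
    """True if the entry touches any sensitive workflow."""
    return bool(SENSITIVE & set(entry.touchpoints))

register = [
    RegisterEntry("EHR auto-summary", "VendorX", tier=2,
                  domain="clinical", touchpoints=["notes"]),
    RegisterEntry("Scheduling optimizer", "VendorY", tier=3,
                  domain="ops", touchpoints=["scheduling"]),
]

flagged = [e.name for e in register if needs_flag(e)]
```

Even a simple structure like this forces the question each row must answer: what is it, where does it run, and does it touch anything sensitive.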

2) Assign accountable owners. Name people, not functions

For Tier 1 and Tier 2, assign named owners for:

  • clinical safety and appropriateness
  • operational performance and continuity
  • privacy and security (data protection, access, vendor risk)
  • assurance (validation, monitoring, drift, audit readiness)

Also document who holds final decision accountability when humans and AI interact.

Output you want by day 45: a one-page ownership map.

3) Write one-page rules of use for every Tier 1 and Tier 2 tool

For each tool, write one page:

  • what it can do, and what it must not do
  • where it runs (module, pathway, department)
  • oversight points (review thresholds, co-sign, second-check triggers)
  • escalation steps. “If it looks wrong, do X, then Y, then Z”
  • patient transparency stance (what you disclose, where, how) [1]

Output you want by day 60: rules of use for your highest-risk tools.

4) Validate before scale, then monitor like a clinical pathway

  • Run pre go-live tests with acceptance criteria
  • Check accuracy and robustness, check bias where relevant
  • Choose 3 to 5 KPIs for Tier 1 and Tier 2:
    • override rate
    • rework burden
    • incident rate
    • complaint signals
    • drift indicators [1]
  • Review weekly in month 1, then monthly for Tier 1 and Tier 2

Output you want by day 75: one simple dashboard and a review cadence.
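Two of those KPIs are easy to compute from data most tools already log. A rough sketch, assuming you track overrides per period and hold a validated baseline accuracy (the function names, the 5-point tolerance, and the sample numbers are illustrative):

```python
def override_rate(overrides: int, total_outputs: int) -> float:
    """Share of AI outputs that staff overrode or corrected."""
    return overrides / total_outputs if total_outputs else 0.0

def drift_alert(baseline_accuracy: float, current_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag a quiet performance drop beyond the agreed tolerance."""
    return (baseline_accuracy - current_accuracy) > tolerance

# One week of monitoring data for a Tier 1 tool (sample values).
week = {"overrides": 14, "total_outputs": 200,
        "baseline_accuracy": 0.92, "current_accuracy": 0.85}

kpis = {
    "override_rate": override_rate(week["overrides"], week["total_outputs"]),
    "drift": drift_alert(week["baseline_accuracy"], week["current_accuracy"]),
}
```

The point is not the code but the discipline: a drift alert only works if someone recorded the baseline at validation time.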

5) Run AI incident response like quality and safety

  • Define “AI incident” in plain terms:
    • unsafe advice
    • privacy event
    • systemic misinformation
    • workflow disruption [1]
  • Enable rapid pause and rollback
  • Log incidents in your quality and safety system
  • Close the loop with corrective actions [1]

Output you want by day 90: a runbook, plus one tabletop drill.
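A minimal sketch of the incident record behind that runbook, using the incident types defined above (the class, field names, and example entry are hypothetical, not a prescribed quality-system schema):

```python
from dataclasses import dataclass
from datetime import date

# Incident types from the plain-terms definition above.
AI_INCIDENT_TYPES = {"unsafe advice", "privacy event",
                     "systemic misinformation", "workflow disruption"}

@dataclass
class AIIncident:
    tool: str
    incident_type: str
    reported: date
    paused: bool = False          # was the tool paused or rolled back?
    corrective_action: str = ""   # empty until the loop is closed

    def __post_init__(self):
        if self.incident_type not in AI_INCIDENT_TYPES:
            raise ValueError(f"Unknown AI incident type: {self.incident_type}")

log = [
    AIIncident("discharge-draft assistant", "unsafe advice",
               date(2026, 2, 1), paused=True,
               corrective_action="tightened review threshold"),
]

# Incidents still waiting on corrective action.
open_loops = [i.tool for i in log if not i.corrective_action]
```

Rejecting unknown incident types at entry is the code-level version of "define AI incident in plain terms": if staff cannot classify it, the definition needs work, not the logger.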

6) Tighten procurement and vendor controls

For Tier 1 and Tier 2 tools, ask for:

  • purpose and limits
  • update cadence
  • monitoring approach
  • audit logs and investigation support

Add contract language for:

  • change management
  • security duties
  • data processing roles

Also confirm if the use case could be medical device software in your context, and apply higher assurance where needed.

7) Strengthen your data foundations

  • Map data sources and permissions per use case
  • Track provenance and lawful basis for reuse
  • Prioritize interoperability and data quality work tied to your top use cases

This aligns with EHDS direction on data governance and reuse, and with Swiss work aimed at better exchange.

 

What to show auditors, boards, and partners


If someone asks, “prove control,” you should be able to show:

  • AI register (what, where, why, tier)
  • named owners (clinical, ops, privacy, assurance)
  • rules of use for Tier 1 and Tier 2 tools
  • validation records and go-live criteria
  • monitoring dashboard and review cadence notes
  • audit logs and access controls
  • incident response runbook, plus one drill record
  • vendor update approvals and change notes
  • medical device classification stance where relevant
  • data mapping and permissions for top use cases

 

Next step. Get this under control in 90 days

Want this under control in 90 days?

Let’s run an AI Governance Sprint built for hospitals, not theory.

In 2 to 3 working sessions, we will deliver:

  • an AI register (including hidden vendor features)
  • risk tiers for every use case
  • named owners (clinical, ops, privacy, assurance)
  • one-page Rules of Use for Tier 1 and Tier 2 tools
  • a monitoring dashboard with a review cadence
  • an incident response runbook with a rollback drill plan

Share the 5 AI use cases you worry about most.

I’ll translate them into a practical sprint scope, with owners, checkpoints, and a clear first set of controls.

 

References

1. BHM Healthcare Solutions, “AI governance, not adoption, is now a key leadership test” (2025 in Review, Part 3). [https://bhmpc.com/2026/01/2025-in-review-part-3/]

2. European Union, Artificial Intelligence Act, Regulation (EU) 2024/1689, official EUR-Lex page. [https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng]

3. European Commission, “European Health Data Space Regulation (EHDS)” timeline. Entered into force 26 March 2025. [https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space-regulation-ehds_en]

4. Swiss Federal Council, “AI regulation: Federal Council to ratify Council of Europe Convention,” 12 February 2025 (news.admin.ch). [https://www.news.admin.ch/en/nsb?id=104]

5. KMU.admin.ch, “New Federal Act on Data Protection (nFADP),” applicable since 1 September 2023. [https://www.kmu.admin.ch/kmu/en/home/facts-and-trends/digitization/data-protection/new-federal-act-on-data-protection-nfadp.html]

6. Swiss Federal Office of Public Health (FOPH), “Medical devices legislation,” revised in line with EU MDR and IVDR. [https://www.bag.admin.ch/en/medical-devices-legislation]

7. digital.swiss, DigiSanté programme description, standards and infrastructure for data exchange and a Swiss health data space direction. [https://digital.swiss/en/action-plan/measures/design-of-programme-to-promote-digital-transformation-in-the-healthcare-sector]

 

Frequently asked questions

1) What counts as “AI” in this control model?

Anything that generates, predicts, recommends, or triggers automated next steps inside hospital workflows.

That includes:

  • GenAI: drafting discharge instructions, summarizing notes, generating patient messages
  • Machine learning: risk scores, prediction models, routing and prioritization
  • Embedded vendor AI: “smart” EHR features, coding suggestions, revenue cycle automation, call center guidance, scheduling optimization

Rule of thumb: if a feature can influence care, access, documentation, or patient-facing content, put it in the AI register and govern it.

2) We didn’t “buy AI”. Do we still need AI governance?

Yes. Many hospitals run AI because vendors ship it inside tools you already pay for—often as auto-summaries, prompts, coding helpers, or workflow automation.

Governance is not about what you purchased. It is about what influences decisions and outputs in your environment.

If leaders can’t say where AI runs, what it uses, and what happens when it’s wrong, risk grows fast.

3) What’s the fastest way to decide what to govern first?

Use the tier model in the post and start with the top two tiers:

  • Tier 1: patient-facing and decision-influencing
  • Tier 2: workflow-directing or financially material

Then do four things first: name owners, write Rules of Use, set monitoring, and prepare the evidence pack for Tier 1–2 tools.
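A rough way to encode that triage rule as a starting point (the function and its boolean inputs are an illustration of the rule of thumb, not a formal classification scheme; Tiers 3 and 4 are not distinguished here):

```python
def assign_starting_tier(patient_facing: bool,
                         decision_influencing: bool,
                         workflow_directing: bool,
                         financially_material: bool) -> int:
    """Starting tier per the rules above; refine with clinical judgment.

    Tier 1: patient-facing or decision-influencing.
    Tier 2: workflow-directing or financially material.
    Everything else starts at Tier 3 pending review.
    """
    if patient_facing or decision_influencing:
        return 1
    if workflow_directing or financially_material:
        return 2
    return 3

# A patient-messaging assistant lands in Tier 1; a billing-code helper
# that is financially material but not patient-facing lands in Tier 2.
```

Note the ordering: patient-facing and decision-influencing checks come first, so a tool that is both financially material and patient-facing is still Tier 1.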

4) What’s the minimum we should be able to show our board or auditors?

A simple “proof of control” pack that you can pull up in one meeting:

  • AI register (what, where, why, tier)
  • Named owners (clinical, ops, privacy, assurance)
  • Rules of Use for Tier 1–2 tools
  • Validation records and go-live criteria
  • Monitoring dashboard + review cadence notes
  • Audit logs and access controls
  • Incident response runbook + one drill record
  • Vendor update approvals and change notes
  • Medical device classification stance where relevant
  • Data mapping and permissions for top use cases

If you can show these, you can answer most credibility questions quickly.

5) How do we handle AI safety without slowing down operations?

Don’t build a new bureaucracy. Embed AI control into existing routines.

Practical moves that work in real hospital life:

  • Use a standard one-page Rules of Use template for Tier 1–2 tools
  • Review 3–5 KPIs on a set cadence (weekly first month, then monthly)
  • Route AI incidents through your quality and safety process
  • Require vendors to support logging, investigation, and change control

This keeps innovation moving while keeping risk visible and managed.

6) When do EU rules matter for Swiss hospitals?

EU standards can matter in practice when you:

  • deploy AI systems placed on the EU market by vendors you use
  • deliver cross-border services to EU patients or partner entities
  • participate in EU research networks or EU-based data reuse setups
  • operate affiliates, clinics, or contracts inside EU member states

Even when not strictly required, many Swiss providers adopt EU-grade governance because vendors, partners, and patient flows pull them there.

7) Who should own AI governance inside the hospital?

AI governance needs shared ownership, but accountability must be named for Tier 1–2 tools.

A workable split:

  • Clinical owner: safety and appropriateness
  • Operational owner: performance and continuity
  • Privacy/security owner: data protection, access, vendor risk
  • Assurance owner: validation, monitoring, drift, audit readiness

Rule of thumb: if you can’t name the owner in 10 seconds, you don’t have governance yet.

The Bee'z Team

