For years, AI sat in the “innovation” or “IT” bucket. By 2026, that changes decisively. Regulation and investor pressure now make AI a board-level risk, not merely a question of which projects to fund. It no longer lives only in strategy decks; it shows up in daily operations, internal controls, audit trails, and formal board reporting, where legal accountability is actually tested.

This article explains what that shift means in practice for boards in Switzerland and the EU — and what must be in place before 2026 to remain defensible.

From “interesting use case” to a regulated system

The EU AI Act formally entered into force in August 2024. Its implementation is phased in from 2025 to 2027.

  • February 2025: the general provisions and the bans on prohibited AI practices take effect.
  • August 2025: key governance rules for general-purpose AI (GPAI) apply.
  • August 2026: most obligations for high-risk AI systems begin to apply under the Act itself.
  • Through 2027: additional high-risk requirements, especially for AI embedded in regulated products, are scheduled to apply.

In parallel, the European Commission’s recent “Digital Omnibus” proposal includes a possible delay of parts of the high-risk regime by up to 16 months, which could push some obligations closer to late 2027. Even with this proposal, the governance shift for boards is already underway.

The Act takes a risk-based approach:

  • Unacceptable risk systems (for example, social scoring) are banned.
  • High-risk systems—such as AI used in credit scoring, recruitment, biometric identification, and safety-critical products—carry the heaviest compliance burdens.

These are not “documentation exercises.” High-risk AI must sit within a risk-management system, use quality-controlled data, undergo testing, maintain logs, ensure human oversight, and be subject to post-market monitoring.

Boards don’t need to write model cards. But they will be asked why the company runs high-risk AI with no risk register, no governance policy, no clear owner, and no evidence of monitoring.

What the EU AI Act demands from boards

The AI Act does not spell out “board duties” article by article. But it creates obligations that only work if governance changes at the top.

Corporate legal and risk teams are already advising boards to take four practical steps before the regime fully bites:

  1. Identify and classify AI systems: You cannot govern what you cannot see. Boards increasingly expect management to maintain an AI inventory aligned with risk categories, and to update it as new tools roll out.
     
  2. Create an AI risk register: For high-risk and business-critical AI, many organisations are introducing AI-specific risk registers covering purpose, data sources, vendors, bias risk, security, regulatory exposure, controls, and accountable owners.
     
  3. Anchor AI governance policies: A growing number of organisations are adopting board-approved AI policies that define boundaries, escalation rules, human-oversight requirements, and documentation standards. These often reference OECD AI Principles and emerging board-level AI governance frameworks.
     
  4. Strengthen vendor due diligence: Many high-risk systems are bought, not built. Boards increasingly expect AI-specific contractual safeguards, assurance reports, and audit rights, not just technical demos.

A practical example: a Swiss–EU fintech using AI for credit scoring will need to show regulators and investors that it knows where the model comes from, what data it was trained on, how bias is monitored, and who is accountable. The board does not tune the model, but it is responsible for ensuring a structure exists.

Switzerland’s hybrid approach: Alignment without copy-paste

Switzerland has not yet adopted a dedicated AI Act. Instead, it relies on a technology-neutral, sector-specific regulatory approach, guided by general corporate law, data protection rules, and sector regulators, most notably FINMA for financial services.

Three points matter for boards of Swiss and foreign groups with Swiss entities:

  • EU rules still apply to many Swiss companies. The EU AI Act has extraterritorial reach. A Swiss company placing AI systems on the EU market, or using AI in products and services targeting EU users, can fall within scope even if the group is headquartered in Zurich or Zug.
     
  • Swiss regulators are sharpening expectations. FINMA and other authorities have already clarified that AI risks fall under existing governance, risk-management, and accountability obligations. Banks, insurers, and asset managers using AI in trading, underwriting, KYC, or surveillance will be expected to demonstrate control, not just innovation.
     
  • The Council of Europe AI Convention will influence Swiss law. Switzerland signed the Council of Europe’s AI Convention on human rights, democracy, and the rule of law in 2025. This does not instantly create an AI statute—but it commits Switzerland to integrate principles on transparency, accountability, and redress into domestic law over time.

In practice, Swiss boards that dismiss AI as “only an EU issue” risk being caught unprepared. Many groups are already aligning internal governance with EU-style expectations ahead of formal Swiss legislation.

What boards should put in place by 2026

By 2026, leading boards across Switzerland and the EU are likely to be doing four things differently:

  1. Treat AI as a board-level risk class. This does not require a new committee in every case. But it does require regular board visibility on where AI is used, which systems are high-risk, and how incidents are handled. Many large groups already route AI oversight through the risk or audit committee.
     
  2. Build a cross-functional AI governance structure. Legal, IT, data, risk, compliance, and business units must operate under a single governance map. Fragmented ownership is what regulators and investors now distrust.
     
  3. Document decision-making and oversight. AI regulation is documentation-heavy. Boards increasingly rely on policies, risk registers, training logs, and governance minutes to demonstrate structured oversight. These records serve both compliance and future legal defence.
     
  4. Link AI governance into existing control frameworks. AI cannot live in a silo. It must integrate with enterprise risk management, internal-control systems, sustainability reporting, and G20/OECD corporate-governance standards.

Consider a multinational headquartered in Switzerland with operations in Germany and France. By 2026, its board will be expected to know which AI systems in HR, pricing, credit, and ESG reporting are potentially high-risk under the AI Act, how governance operates across the group, and where regulatory exposure sits by entity.

Turning compliance into strategic clarity

For many boards, AI governance initially feels like another compliance burden layered onto ESG, CSRD, AML, and data protection. That view misses the strategic upside.

Handled well, AI governance can:

  • Clarify accountability inside complex groups.
  • Force discipline in data, model selection, and vendor controls.
  • Build trust with regulators, investors, and customers.
  • De-risk innovation rather than slow it down.

The boards that will be trusted to scale AI in 2026 are not the ones that use the most AI. They are the ones that can demonstrate, clearly and simply, who is responsible, what is controlled, and how risk is managed across Switzerland, the EU, and every boundary in between.

Final word

By 2026, AI will no longer be judged only by what it can do, but by how it is governed. Boards in Switzerland and the EU are entering a phase where AI accountability is no longer abstract. It is documented, auditable, and increasingly tied to liability, investor confidence, and market access.

The companies that succeed will not be the fastest adopters of AI. They will be the ones that integrate it most responsibly into their governance, risk, and control frameworks, across borders, regulators, and operating models.

This is where SIGTAX operates at the board level: helping companies translate complex AI regulation into defensible governance structures, cross-border compliance strategies, and operating models that regulators, investors, and counterparties can trust. In the AI era, competitive advantage will belong to those who pair innovation with governance discipline.
