NIS2 and AI in Critical Infrastructure: Incident Reporting, Supply Chain Security, and Personal Liability

5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

NIS2 (Network and Information Security Directive 2, Directive (EU) 2022/2555) became applicable across EU member states in October 2024, with a transposition deadline of 17 October 2024. It significantly expands the scope of its predecessor: where NIS1 covered a relatively narrow set of critical infrastructure operators, NIS2 covers essential and important entities across 18 sectors, including energy, transport, banking, financial market infrastructure, health, drinking water, digital infrastructure, ICT service management, public administration, and space.

If your organization operates in any of these sectors in the EU and uses AI agents, NIS2 requirements apply to those AI systems as part of your overall cybersecurity obligations.

Who NIS2 Covers

Essential entities — large organizations in highly critical sectors (energy, transport, banking, financial market infrastructure, health, digital infrastructure):

  • Operators of essential services
  • DNS service providers, TLD registries, cloud providers, data center operators, CDN providers, managed service providers

Important entities — medium and large organizations in other critical sectors (postal, waste management, chemicals, food, manufacturing, research) and digital providers such as online marketplaces, online search engines, and social networking platforms

The size thresholds: medium enterprises (50 or more employees, or more than €10M annual turnover) fall in scope. Essential entities are generally also large enterprises (250 or more employees, or more than €50M turnover), though some entity types — such as DNS service providers and TLD registries — qualify regardless of size.
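As a rough sketch of the size test (simplified: the EU SME definition also considers balance sheet totals, which are omitted here):

```python
def nis2_size_category(employees: int, turnover_m_eur: float) -> str:
    """Classify an entity by the NIS2 size-cap thresholds.

    Simplified sketch: balance sheet totals and the special
    regardless-of-size entity types are not modeled.
    """
    if employees >= 250 or turnover_m_eur > 50:
        return "large"            # essential-entity size band
    if employees >= 50 or turnover_m_eur > 10:
        return "medium"           # important-entity size band
    return "below threshold"      # generally out of scope
```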

For AI specifically: AI systems used by essential or important entities are part of the ICT infrastructure subject to NIS2's requirements, regardless of whether the AI system itself would independently qualify.

The Ten Security Measures

NIS2 Article 21 requires organizations to take appropriate and proportionate technical, operational, and organizational measures to manage cybersecurity risks. Specifically:

  1. Risk analysis and information system security policies
  2. Incident handling
  3. Business continuity — backup management, disaster recovery, crisis management
  4. Supply chain security — security in supplier/service provider relationships
  5. Security in network and information systems acquisition, development, and maintenance — including vulnerability handling and disclosure
  6. Policies to assess effectiveness of cybersecurity risk management measures
  7. Basic cyber hygiene practices and cybersecurity training
  8. Policies on use of cryptography and encryption
  9. Human resources security, access control, asset management
  10. Use of multi-factor authentication — or continuous authentication solutions

For AI agents: measures 1, 2, 5, 8, and 10 are directly implemented by UAPK's architecture:

  • Risk analysis → qualification funnel + framework evidence
  • Incident handling → audit log + evidence bundle
  • Vulnerability disclosure → governance config + version control
  • Cryptography → Ed25519 signatures, Fernet encryption, hash chain
  • MFA/continuous authentication → capability tokens (signed credentials with expiry)

Incident Reporting Timeline

NIS2's incident reporting requirements are strict:

  • 24 hours — Early warning to the national CSIRT/competent authority that a "significant incident" has occurred
  • 72 hours — Incident notification with initial assessment, severity, and indicators of compromise
  • 1 month — Final report with full description, root cause, cross-border impact, and measures taken
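The three deadlines can be computed mechanically from the moment of awareness. A sketch ("1 month" is approximated as 30 days here; the exact final-report deadline depends on national transposition):

```python
from datetime import datetime, timedelta

def nis2_reporting_deadlines(aware_at: datetime) -> dict:
    """NIS2 Article 23 reporting deadlines, measured from awareness of the incident."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": aware_at + timedelta(hours=72),
        "final_report": aware_at + timedelta(days=30),  # "one month", approximated
    }
```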

A "significant incident" is one that causes severe operational disruption or financial losses, or has affected/can affect other natural or legal persons by causing considerable material or non-material damage.

For an AI agent incident — the agent was manipulated into unauthorized actions, it caused a data breach, or it disrupted critical operations — the 24-hour early warning clock starts when the organization becomes aware of it.

Detecting within 24 hours requires continuous monitoring. The UAPK audit log, with real-time anomaly detection over interaction records, is the detection mechanism.
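A minimal version of that detection loop — scanning interaction records for tools outside the approved set and for call-rate spikes — might look like this (illustrative only; field names and thresholds are assumptions, not UAPK's actual interface):

```python
from collections import Counter

def detect_anomalies(records, allowed_tools, max_calls_per_minute=30):
    """Scan audit-log interaction records and return a list of alerts.

    Each record is a dict with 'timestamp' (unix seconds) and 'tool'.
    """
    alerts = []
    # 1. Any call to a tool outside the approved set is an immediate alert.
    for rec in records:
        if rec["tool"] not in allowed_tools:
            alerts.append(("unauthorized_tool", rec))
    # 2. A burst of calls in a single minute suggests a manipulated agent.
    per_minute = Counter(rec["timestamp"] // 60 for rec in records)
    for minute, count in per_minute.items():
        if count > max_calls_per_minute:
            alerts.append(("rate_spike", {"minute": minute, "calls": count}))
    return alerts
```

Any alert from a loop like this starts the 24-hour clock, so the escalation path from detection to the reporting team needs to be defined in advance.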

Supply Chain Security

NIS2 places significant emphasis on supply chain security. Organizations must assess the cybersecurity practices of their ICT service providers — which includes AI model providers, cloud platforms, and AI governance vendors.

This creates a direct obligation for organizations using third-party AI services: you must assess their security practices, include security requirements in contracts, and have exit plans if a provider's security posture degrades.

For self-hosted UAPK: supply chain security concerns are reduced because the governance infrastructure is under your direct control. The AI models your agents call remain a supply chain risk you must still assess.

For SaaS AI governance: your governance provider is part of your supply chain. Their NIS2 assessment is your concern.

Management Body Liability

NIS2's most significant new provision: management bodies (boards, executive teams) of essential and important entities are personally liable for cybersecurity failures. Article 20 requires:

  • Management bodies must approve cybersecurity risk management measures
  • Management bodies must oversee their implementation
  • Management bodies can be held liable for infringements

Member states can provide for personal liability of top management for negligence in case of a cybersecurity incident.

This is the same personal accountability pattern as SOX Section 906, SMCR, and CMMC's self-attestation requirements. The trend across all major cybersecurity and AI governance regulations is toward individual accountability at the top.

For AI agents in critical infrastructure: the board must have approved the AI governance framework (the manifest policy), and that approval must be documented. The manifest version history — with an approval workflow that records who signed off on each version — is that documentation.
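The documentation pattern described above — a version history where each approval records who signed off and is chained to the previous entry — can be sketched as follows (illustrative structure, not UAPK's actual schema):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def approve_version(history, manifest, approved_by):
    """Append a signed-off manifest version, hash-chained to the previous entry."""
    entry = {
        "version": len(history) + 1,
        "manifest": manifest,
        "approved_by": approved_by,
        "prev": history[-1]["hash"] if history else GENESIS,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    history.append(entry)
    return entry

def verify_history(history):
    """Recompute the chain; any tampered entry breaks verification."""
    prev = GENESIS
    for entry in history:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Because each entry's hash covers the approver's identity and is folded into the next entry, an approval cannot later be edited or disowned without breaking the chain — which is what makes the record usable as liability documentation.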

NIS2 + DORA: The Financial Sector Stack

For EU financial entities in critical infrastructure sectors, both NIS2 and DORA apply. The relationship:

  • DORA is lex specialis for financial sector entities — it takes precedence over NIS2 where they overlap
  • Financial entities subject to DORA are considered to comply with NIS2 incident reporting requirements if they report under DORA's framework
  • Non-financial aspects of NIS2 (supply chain security, management accountability) still apply alongside DORA

The practical outcome: financial entities build their AI governance to DORA's standard (which is stricter in most respects) and get NIS2 compliance as a byproduct.

The Energy Sector Case

An EU energy grid operator deploying AI for demand forecasting, grid balancing, or outage detection is an essential entity under NIS2. Its AI agents face:

  • The 10 security measures (all of them)
  • 24-hour incident reporting
  • Supply chain security obligations for AI model providers
  • Management body liability
  • Regular security audits (mandatory for essential entities)

The UAPK manifest for such an agent locks the jurisdiction to EU, restricts tools to pre-approved grid management systems, and puts all network configuration actions behind human approval — implementing NIS2's requirements structurally, not just procedurally.
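Structurally, that manifest reduces to a small policy gate. A sketch under assumed field names (not UAPK's actual manifest format):

```python
MANIFEST = {
    "jurisdiction": "EU",
    "allowed_tools": {"demand_forecast", "grid_balance", "outage_detect"},
    "human_approval": {"network_config"},  # actions gated behind a human approver
}

def gate(tool: str, jurisdiction: str) -> str:
    """Decide allow / escalate / deny for a proposed agent action."""
    if jurisdiction != MANIFEST["jurisdiction"]:
        return "deny"      # jurisdiction lock
    if tool in MANIFEST["human_approval"]:
        return "escalate"  # route to a human approver before execution
    if tool in MANIFEST["allowed_tools"]:
        return "allow"
    return "deny"          # anything not pre-approved is rejected
```

The default-deny branch is the structural part: compliance does not depend on the agent choosing to follow procedure, because unapproved actions never execute.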