NIS2 and AI in Critical Infrastructure: Incident Reporting, Supply Chain Security, and Personal Liability

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

NIS2 (Network and Information Security Directive 2) became applicable across EU member states in October 2024. It significantly expands the scope of its predecessor: where NIS1 covered a relatively narrow set of critical infrastructure operators, NIS2 covers essential entities and important entities across 18 sectors including energy, transport, banking, financial market infrastructure, health, drinking water, digital infrastructure, ICT service management, public administration, and space.

If your organization operates in any of these sectors in the EU and uses AI agents, NIS2 requirements apply to those AI systems as part of your overall cybersecurity obligations.

UK AI Regulation: The FCA, ICO, and the Principles-Based Approach After Brexit

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The UK made a deliberate choice not to copy the EU AI Act. After Brexit, the government opted for a cross-regulator, sector-specific, principles-based approach to AI regulation — lighter-touch by design, aiming to position the UK as a pro-innovation AI jurisdiction.

In practice, "lighter-touch" doesn't mean "ungoverned." It means the rules live inside sector regulators — the FCA, ICO, PRA, CMA — rather than in a single prescriptive statute. For AI teams building products for the UK market, understanding this distributed regulatory structure is essential.

CCPA/CPRA and AI Agents: California's Consumer Privacy Rights in Automated Systems

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The California Privacy Rights Act (CPRA) went into full effect January 1, 2023, amending and strengthening CCPA. Among the most significant additions for AI teams: explicit rights around automated decision-making and profiling — the closest the US has come, at the state level, to GDPR Article 22.

If your AI agents process personal information of California residents, CCPA/CPRA applies regardless of where your company is incorporated. California has approximately 39 million residents and is the fifth-largest economy in the world. Treat it as its own regulatory jurisdiction.

Compliance Framework Monitoring: Keeping Your AI Agent Policy Current as Regulations Change

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

Compliance is not a one-time event. Regulations get amended. Enforcement guidance clarifies what the law actually means in practice. Technical standards get updated. Courts issue rulings that change how rules are interpreted. Regulatory deadlines pass and new ones appear.

An AI agent manifest written in January 2026 may need to be updated by December 2026 because one of its frameworks changed. The question is whether you find out proactively — before a regulator does — or reactively.
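One way to catch that drift proactively is to pin each framework in the manifest to the version it was last reviewed against, then diff those pins against a monitoring feed. A minimal sketch of the idea — the field names and manifest shape here are hypothetical, not drawn from UAPK or any real schema:

```python
from datetime import date

# Hypothetical manifest: each framework the agent claims compliance with
# is pinned to the version/date it was last reviewed against.
manifest = {
    "agent": "invoice-processor",
    "frameworks": {
        "ISO 27001": {"reviewed_version": "2022", "reviewed_on": date(2026, 1, 15)},
        "EU AI Act": {"reviewed_version": "2024/1689", "reviewed_on": date(2026, 1, 15)},
    },
}

# Current framework versions as tracked by a (hypothetical) monitoring feed.
current_versions = {
    "ISO 27001": "2022",
    "EU AI Act": "2024/1689-amended",
}

def stale_frameworks(manifest: dict, current_versions: dict) -> list[str]:
    """Return the frameworks whose pinned version no longer matches the feed."""
    return [
        name
        for name, pin in manifest["frameworks"].items()
        if current_versions.get(name) != pin["reviewed_version"]
    ]

print(stale_frameworks(manifest, current_versions))  # ['EU AI Act']
```

Running this diff on every feed update is what turns "find out before a regulator does" from an aspiration into a scheduled job.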

Singapore's Agentic AI Framework: The Most Forward-Looking AI Governance Document in Force

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

Most AI governance frameworks were written with predictive AI in mind: a model that takes inputs and produces outputs, with humans reviewing outputs before acting. The Singapore framework published in January 2026 is different. MAS and IMDA wrote it specifically for agentic AI — autonomous systems that plan, take multi-step actions, and interact with external systems without step-by-step human oversight.

It's the most direct regulatory guidance for the type of AI agents that organizations are actually deploying in 2026. And its four concepts apply universally — not just in Singapore.

EU Cyber Resilience Act: What the December 2026 Deadline Means for AI Software Products

· 4 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The EU Cyber Resilience Act (CRA) entered into force in December 2024. Most obligations apply from December 2027, but certain reporting requirements (vulnerability and incident reporting to ENISA) apply from September 2026. Products with digital elements — including AI-embedded software — are in scope.

If you're selling software into the EU that includes AI components, the CRA applies to your product. This is separate from the EU AI Act: the CRA covers cybersecurity; the AI Act covers AI governance. Both apply simultaneously to AI software sold in the EU.

Multi-Framework AI Compliance: How Global Enterprises Handle 12+ Overlapping Regulations

· 4 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

A global financial services company operating in New York, London, Frankfurt, Sydney, and Singapore doesn't get to choose which regulations apply. They all apply simultaneously. SOX + GDPR + HIPAA + MiFID II + FCA + DORA + NIS2 + AML + PCI-DSS + ISO 27001 + NIST CSF + SOC 2.

The question isn't "Which ones do we need to comply with?" It's "How do we build a single governance architecture that satisfies all of them without creating 12 separate compliance silos?"

The answer is that most frameworks require the same underlying controls — they just describe them differently and attach different evidence requirements.
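One way to make that concrete is a shared-controls table: each control is implemented once, and each framework's requirement is attached to that one control. The control names and clause references below are illustrative placeholders, not authoritative citations:

```python
# Illustrative shared-controls table: one control, many framework mappings.
# Clause references are placeholders, not quotes from the actual texts.
CONTROLS = {
    "access-review-quarterly": {
        "description": "Quarterly review of privileged access",
        "satisfies": {
            "ISO 27001": "access control",
            "SOC 2": "logical access criteria",
            "SOX": "ITGC access management",
        },
    },
    "encryption-at-rest": {
        "description": "Encrypt stored personal data",
        "satisfies": {
            "GDPR": "security of processing",
            "PCI-DSS": "protect stored account data",
            "HIPAA": "technical safeguards",
        },
    },
}

def coverage(framework: str) -> list[str]:
    """List the shared controls that produce evidence for one framework."""
    return [
        name for name, ctl in CONTROLS.items() if framework in ctl["satisfies"]
    ]

print(coverage("SOC 2"))  # ['access-review-quarterly']
```

The design choice is that evidence collection hangs off the control, not the framework: one quarterly access review produces artifacts for ISO 27001, SOC 2, and SOX at once, instead of three teams running three reviews.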

India DPDP, Australia Privacy Act, and UAE PDPL: AI Governance in Three Growing Markets

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The global privacy regulation landscape has expanded well beyond GDPR and CCPA. India, Australia, and the UAE have all enacted or significantly amended data protection laws in the past three years — each with distinct approaches, enforcement mechanisms, and AI-specific provisions.

For AI agents operating in these markets, understanding the differences matters: a manifest configured for GDPR compliance is not automatically compliant with India's DPDP Act.

COPPA and AI: Why Children's Data Is the Highest-Risk Category in US AI Deployments

· 4 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The Children's Online Privacy Protection Act has been consistently enforced by the FTC for 25 years. COPPA violations regularly result in the largest per-violation penalties in US privacy law: up to $51,744 per violation under the 2024 inflation adjustment. For AI systems that collect data from or target content to children under 13, no other category in US privacy regulation carries comparable risk.

The FTC has made clear that the "general audience" defense — "we didn't know children were using our platform" — requires demonstrating concrete age-verification measures, not just a terms-of-service statement.

ISO 27001 and AI Agents: Why It's the Baseline for Every Deployment

· 5 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The UAPK qualification funnel has a single framework that triggers for every deployment, no matter how the qualifying questions are answered: ISO 27001. That's no coincidence. ISO 27001 is the information security management baseline that every other framework assumes you have in place.

GDPR references ISO 27001 as a baseline security measure. The EU AI Act's technical standards bodies have referenced it. HIPAA's Security Rule was modeled on its structure. SOC 2's Trust Service Criteria map directly to ISO 27001 domains. If you're going to comply with any specialized framework, you need ISO 27001 as the foundation.