
24 posts tagged with "Audit Logging"

Tamper-evident, hash-chained records of AI actions
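The tag's premise can be sketched in a few lines: each record stores the hash of the record before it, so editing any past record invalidates every later hash. A minimal illustration (the field names and the choice of SHA-256 are ours, not a prescribed scheme):

```python
import hashlib
import json

def append_record(chain, action):
    """Append an action record, chaining it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"action": rec["action"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != digest:
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, "agent: drafted filing section 3")
append_record(log, "human: approved draft")
assert verify(log)
log[0]["action"] = "tampered"
assert not verify(log)
```

Production systems add signing, timestamps, and anchoring, but the tamper-evidence property rests on this chaining.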


EU Cyber Resilience Act: What the December 2026 Deadline Means for AI Software Products

· 4 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor

The EU Cyber Resilience Act (CRA) entered into force in December 2024. Most obligations apply from December 2027, but certain reporting requirements (vulnerability and incident reporting to ENISA) apply from September 2026. Products with digital elements — including AI-embedded software — are in scope.

If you're selling software into the EU that includes AI components, the CRA applies to your product. This is separate from the EU AI Act: the CRA covers cybersecurity; the AI Act covers AI governance. Both apply simultaneously to AI software sold in the EU.

Multi-Framework AI Compliance: How Global Enterprises Handle 12+ Overlapping Regulations

· 4 min read

A global financial services company operating in New York, London, Frankfurt, Sydney, and Singapore doesn't get to choose which regulations apply. They all apply simultaneously. SOX + GDPR + HIPAA + MiFID II + FCA + DORA + NIS2 + AML + PCI-DSS + ISO 27001 + NIST CSF + SOC 2.

The question isn't "which ones do we need to comply with." The question is "how do we build a single governance architecture that satisfies all of them without creating 12 separate compliance silos."

The answer is that most frameworks require the same underlying controls — they just describe them differently and attach different evidence requirements.
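That idea can be made concrete as a crosswalk: one implemented control mapped to the clauses it evidences in each framework. The control names and clause mappings below are illustrative examples, not an authoritative crosswalk:

```python
# Illustrative mapping: one implemented control satisfies clauses in many frameworks.
CONTROL_MAP = {
    "immutable-audit-log": {
        "SOX": "Section 404 (ICFR evidence)",
        "GDPR": "Art. 30 (records of processing)",
        "ISO 27001": "A.8.15 (logging)",
        "PCI-DSS": "Req. 10 (log and monitor access)",
    },
    "role-based-access": {
        "SOX": "Section 404 (segregation of duties)",
        "ISO 27001": "A.5.15 (access control)",
        "PCI-DSS": "Req. 7 (restrict access)",
        "HIPAA": "164.312(a) (access control)",
    },
}

def evidence_needed(framework):
    """List which implemented controls produce evidence for a given framework."""
    return [c for c, m in CONTROL_MAP.items() if framework in m]

print(evidence_needed("PCI-DSS"))
```

Implement each control once; let the mapping, not the implementation, vary per framework.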

ISO 27001 and AI Agents: Why It's the Baseline for Every Deployment

· 5 min read

The UAPK qualification funnel includes exactly one framework that triggers for every deployment, no matter how the qualifying questions are answered: ISO 27001. That's no coincidence. ISO 27001 is the information security management baseline that every other framework assumes you already have in place.

GDPR security-of-processing guidance commonly points to ISO 27001 as a baseline measure. The standards bodies drafting the EU AI Act's harmonised technical standards have referenced it. HIPAA's Security Rule follows a similar structure. SOC 2's Trust Services Criteria map directly onto ISO 27001 domains. If you're going to comply with any specialized framework, you need ISO 27001 as the foundation.

SFDR, CSRD, and AI: How ESG Reporting Requirements Govern AI Agents in Sustainable Finance

· 4 min read

ESG investing has generated its own regulatory stack: SFDR (Sustainable Finance Disclosure Regulation) requires fund managers to classify products under Article 6, 8, or 9 and disclose how sustainability factors are integrated. CSRD (Corporate Sustainability Reporting Directive) requires large EU companies to report on sustainability using ESRS (European Sustainability Reporting Standards).

Both regulations increasingly involve AI: ESG scoring models, portfolio screening algorithms, automated ESRS data collection, and natural language processing of sustainability disclosures. Where AI is involved, the governance and audit requirements of these regulations apply to the AI layer.

FedRAMP and AI Agents: What Federal Cloud Authorization Means for Your AI Stack

· 4 min read

FedRAMP (Federal Risk and Authorization Management Program) Rev. 5 — aligned with NIST SP 800-53 Rev. 5 — is the authorization framework for cloud services used by US federal agencies. If your AI platform is used by a federal agency, or if you're building AI agents that operate on FedRAMP-authorized infrastructure, you're in this regulatory environment.

The 2024 reform of the FedRAMP authorization process has shortened the path for some providers. But the substantive requirements — particularly around logging, access control, and incident reporting — are unchanged and extensive.

PCI-DSS 4.0 and AI Payment Agents: Protecting Cardholder Data in Automated Pipelines

· 4 min read

PCI-DSS 4.0 became the mandatory standard on March 31, 2024, when version 3.2.1 was retired. Among the significant changes in v4.0: expanded requirements for automated systems and mechanisms operating within or adjacent to the Cardholder Data Environment (CDE), requirements that reach AI-driven agents.

If your AI agent handles, routes, processes, or queries payment card data — primary account numbers (PANs), CVVs, cardholder names, expiration dates — PCI-DSS 4.0 applies to both the agent and its infrastructure.
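One small concrete example of what scope means in practice: anywhere an agent surfaces a PAN, display masking applies. A sketch of the familiar first-six/last-four truncation (the helper name is ours; check Requirement 3 for what your own scope actually permits to be displayed):

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN to first six and last four digits, a common PCI-DSS display rule."""
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

Masking is for display only; stored PANs need encryption or tokenization on top of this.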

DORA and AI Agents: ICT Risk Management for EU Financial Entities

· 5 min read

DORA — the Digital Operational Resilience Act — became applicable on January 17, 2025. It applies to EU financial entities (banks, investment firms, insurance companies, payment institutions, crypto-asset service providers) and their critical ICT third-party service providers.

If you're an AI vendor providing services to EU financial institutions, or an EU financial institution running your own AI agents, DORA's ICT risk management framework applies to those AI systems.

CMMC 2.0 and DoD AI Agents: Protecting CUI Without Slowing Down Operations

· 4 min read

CMMC 2.0 is no longer proposed — it's in the Federal Register and is being phased into DoD contracts through 2026. If you're a defense contractor that uses AI agents to handle Controlled Unclassified Information (CUI), you need CMMC compliance baked into those agents.

The consequence of getting this wrong isn't a fine. It's losing your DoD contracts.

SOX and AI Financial Reporting: What Sections 302, 404, and 906 Mean for Autonomous Agents

· 5 min read

SOX Section 302 requires the CEO and CFO to personally certify that financial reports are accurate and that they've reviewed the controls over financial reporting. Section 906 makes false certifications a criminal offense — up to 20 years in prison.

When an AI agent is generating financial reports, running disclosure checks, or preparing SEC filings, those certifications still apply. The executives signing them need to be able to vouch for the process that produced the numbers.

That's only possible if the AI's actions are auditable, the outputs are traceable to specific data sources, and a human reviewed the result before it was filed.
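Those three conditions — auditable actions, traceable outputs, human review — can be made concrete as fields on a record. A hypothetical sketch, with field and class names of our own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class FilingRecord:
    """Illustrative audit record for an AI-prepared financial figure."""
    figure: str
    value: float
    generated_by: str                       # which agent produced the number
    data_sources: list = field(default_factory=list)  # traceability to inputs
    reviewed_by: str = ""                   # human sign-off before filing

    def ready_to_file(self) -> bool:
        # A figure is filable only if it is traceable AND human-reviewed.
        return bool(self.data_sources) and bool(self.reviewed_by)

rec = FilingRecord("Q3 revenue", 12_400_000.0, generated_by="reporting-agent-v2",
                   data_sources=["gl_export_2025-09.csv"])
assert not rec.ready_to_file()   # no human review yet
rec.reviewed_by = "CFO"
assert rec.ready_to_file()
```

The point is structural: the record itself refuses to be "filable" until both provenance and review exist, which is exactly the evidence a certifying executive needs.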

MiFID II and Algorithmic Trading AI: Best Execution, Kill Switches, and the Algo Register

· 4 min read

MiFID II Article 17 was written specifically for algorithmic trading. It predates large language models, but its requirements translate directly to AI trading agents: you need a kill switch, an algo register, annual conformity testing, and an audit trail that covers every order generated by the algorithm.
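A kill switch in this sense is conceptually simple: a shared halt flag checked before every order, with the halt itself and every blocked order written to the audit trail. A minimal sketch (class and method names are hypothetical, not from any exchange API):

```python
import threading

class TradingAgent:
    """Sketch of a kill switch: a shared flag checked before every order."""

    def __init__(self):
        self._halted = threading.Event()
        self.audit_trail = []

    def kill(self, reason: str):
        """Immediately halt all order generation and log the halt."""
        self._halted.set()
        self.audit_trail.append(("KILL", reason))

    def submit_order(self, order: str) -> bool:
        if self._halted.is_set():
            self.audit_trail.append(("BLOCKED", order))
            return False
        self.audit_trail.append(("ORDER", order))
        return True

agent = TradingAgent()
assert agent.submit_order("BUY 100 XYZ")
agent.kill("erratic quoting detected")
assert not agent.submit_order("BUY 100 XYZ")
```

Using an `Event` rather than a plain boolean makes the flag safe to set from a supervisory thread while the trading loop runs.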

The FCA's equivalent rules in the UK (post-Brexit) mirror MiFID II Article 17 almost exactly. If you operate in both jurisdictions, you're dealing with two regulators but essentially the same requirements.