
22 posts tagged with "Audit Logging"

Tamper-evident, hash-chained records of AI actions


How to Read a Gateway Deny Response

· 6 min read
David Sanker
Lawyer, Legal Knowledge Engineer & UAPK Inventor | Patent EP 25 000 056.9 | ORCID 0009-0004-9636-3910

The UAPK gateway returns structured responses for every evaluation. A DENY or ESCALATE response includes a reason code that tells you exactly which policy check failed and why. If you're building an integration and getting unexpected denies, this post is your reference.
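For orientation, here is a minimal sketch of handling such a response. The field names (`decision`, `reason_code`, `detail`) and the reason code shown are illustrative placeholders, not the documented gateway schema:

```python
# Hypothetical shape of a gateway evaluation response -- the field names
# below are illustrative, not the documented schema.
import json

def summarize_decision(raw: str) -> str:
    """Turn a gateway response into a one-line summary for debugging."""
    resp = json.loads(raw)
    decision = resp.get("decision", "UNKNOWN")
    if decision in ("DENY", "ESCALATE"):
        # Surface the reason code first -- it names the failed policy check.
        return f"{decision}: {resp.get('reason_code', '?')} -- {resp.get('detail', '')}"
    return decision

example = '{"decision": "DENY", "reason_code": "CAP_SCOPE_EXCEEDED", "detail": "write not in token scope"}'
print(summarize_decision(example))  # -> DENY: CAP_SCOPE_EXCEEDED -- write not in token scope
```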

Capability Tokens: How UAPK Scopes Agent Permissions per Session

· 6 min read

The manifest defines what an AI agent is allowed to do over its entire deployed lifetime. That's too coarse for most real deployments. You want the agent to be able to read customer data when it's responding to a customer query — but not when it's running a batch analytics task. You want different agents deployed with the same manifest to have different effective permissions depending on what task they're executing.

Capability tokens solve this. They are signed credentials — issued per session or per task — that scope the agent's permissions to a subset of its manifest-defined capabilities, for a specific time window, with a maximum action count.
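The idea fits in a few lines. The sketch below is a generic signed-credential pattern, assuming HMAC signing and invented field names (`caps`, `exp`, `max_actions`); it is not the UAPK wire format:

```python
# Illustrative session-scoped capability token: a signed payload narrowing
# manifest capabilities to a subset, a validity window, and an action budget.
import hmac, hashlib, json, time

SECRET = b"demo-signing-key"  # stand-in for the issuer's signing key

def issue_token(capabilities, ttl_seconds, max_actions):
    payload = {
        "caps": sorted(capabilities),           # subset of manifest capabilities
        "exp": int(time.time()) + ttl_seconds,  # end of the validity window
        "max_actions": max_actions,             # hard budget for this session
    }
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token):
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # Any edit to the payload (e.g. widening "caps") invalidates the signature.
    return hmac.compare_digest(expected, token["sig"]) and token["payload"]["exp"] > time.time()

tok = issue_token({"crm.read"}, ttl_seconds=900, max_actions=50)
print(verify_token(tok))  # True for a freshly issued, untampered token
```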

Building Your First UAPK Manifest: A Step-by-Step Guide

· 6 min read

The fastest path from zero to a governed AI agent is: run the qualification funnel → get your framework list → configure a manifest → register it → make a call. This post walks through each step with real examples.

If you're impatient, the manifest for a simple US SaaS agent is at the bottom of this post. For everyone else, starting with the qualification funnel means you understand why each field is configured the way it is.
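As a taste of the destination, here is a toy manifest plus a pre-registration sanity check. Every field name below is an assumption made for this sketch; the post's real example and the registered schema are authoritative:

```python
# A minimal, illustrative manifest for a simple US SaaS agent.
# Field names are assumptions for the walkthrough, not the real schema.
manifest = {
    "agent_id": "support-bot-01",
    "frameworks": ["SOC2", "CCPA"],              # output of the qualification funnel
    "capabilities": ["crm.read", "email.send"],  # what the agent may ever do
    "escalation": {"on": ["payment.refund"], "to": "human-review"},
}

def lint_manifest(m):
    """Cheap sanity check before registration: which required keys are missing?"""
    required = {"agent_id", "frameworks", "capabilities"}
    return sorted(required - m.keys())

print(lint_manifest(manifest))  # [] when nothing is missing
```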

EU MDR, FDA SaMD, and 21 CFR Part 11: AI Agents in Medical Devices and Clinical Software

· 6 min read

If your AI agent touches clinical decision-making, diagnostic recommendations, treatment planning, or patient risk scoring, it may be classified as Software as a Medical Device (SaMD). SaMD classification triggers regulatory requirements that are separate from and stricter than HIPAA — you're now in the FDA's jurisdiction (US) or EU MDR/IVDR jurisdiction (EU), not just privacy law territory.

The distinction matters because SaMD regulations aren't primarily about privacy. They're about safety: ensuring that software used in medical decisions is clinically validated, properly labeled, manufactured under quality controls, and doesn't cause patient harm when it behaves unexpectedly.

ISO 27701: Privacy Information Management for AI Systems

· 6 min read

ISO/IEC 27701:2019 extends ISO 27001 with a Privacy Information Management System (PIMS). It adds privacy-specific clauses and controls on top of the ISO 27001 management system, mapping to GDPR, CCPA, and other major privacy regulations.

For organizations already certified to ISO 27001, adding ISO 27701 extends the existing management system rather than building a new one. The incremental effort is roughly 30–50% of the original ISO 27001 implementation, depending on how mature your privacy practices already are.

For AI systems that process personal data, ISO 27701 is the most rigorous international framework for demonstrating privacy compliance. While it is not itself a GDPR certification mechanism under Article 42, ISO 27701 certification can serve as evidence of accountability under GDPR Article 5(2).

NIST CSF 2.0 and AI Agents: Govern, Identify, Protect, Detect, Respond, Recover

· 6 min read

NIST released Cybersecurity Framework 2.0 in February 2024. The major change from CSF 1.1: a new Govern function was added, making it a six-function framework (GV, ID, PR, DE, RS, RC). The Govern function addresses organizational context, risk management strategy, and cybersecurity supply chain — topics that were scattered across CSF 1.1 but are now first-class functions.

For AI agents, the new Govern function is the most directly relevant addition. It's where organizational accountability for AI systems lives.

NIST CSF is voluntary for most US organizations, but it functions as a de facto standard for:

  • Federal contractors and agencies (often required by contract or policy)
  • Critical infrastructure operators (energy, water, finance, healthcare)
  • Organizations seeking cyber insurance
  • Any company using NIST as a security baseline alongside FedRAMP or CMMC

SOC 2 Type II and AI Agents: What Auditors Actually Look For

· 6 min read

SOC 2 Type II is the most requested security certification in US enterprise software procurement. If your SaaS product touches customer data and you're selling to mid-market or enterprise buyers, you'll eventually get asked for a SOC 2 Type II report. For AI-native products, auditors are increasingly asking about AI-specific controls — not just the usual infrastructure checklist.

The difference between SOC 2 Type I and Type II matters: Type I attests that your controls are designed correctly as of a point in time. Type II attests that those controls operated effectively over a period of time (typically 6–12 months). The audit period is everything: an AI agent that behaved correctly in January proves nothing if you have no logs showing how it behaved in July.
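This is where the tag's "tamper-evident, hash-chained records" come in: each log entry commits to the hash of the previous one, so rewriting any past entry breaks every hash after it. The following is a generic miniature of the pattern, not UAPK's actual log format:

```python
# Toy hash-chained audit log: append-only entries where each entry's hash
# covers its content plus the previous entry's hash.
import hashlib, json

def _digest(action, prev):
    return hashlib.sha256(
        json.dumps({"action": action, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()

def append_entry(chain, action):
    prev = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    chain.append({"action": action, "prev": prev, "hash": _digest(action, prev)})
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for e in chain:
        # Recompute each hash; any edited entry breaks the chain from there on.
        if e["prev"] != prev or e["hash"] != _digest(e["action"], prev):
            return False
        prev = e["hash"]
    return True

log = []
for act in ["read:crm", "send:email", "deny:refund"]:
    append_entry(log, act)
print(verify_chain(log))  # True until someone rewrites history
```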

FINRA and the SEC: AI Compliance for Broker-Dealers and Investment Advisers

· 5 min read

FINRA and the SEC have moved from observation to active expectation on AI. FINRA's 2024 AI in Financial Services report outlined specific examination focus areas. The SEC's 2024 guidance on AI use in investment advice created new conflict-of-interest disclosure requirements. And FINRA Rule 3110's supervision requirement applies to AI systems used in client-facing functions as fully as it does to human representatives.

If you're a broker-dealer or investment adviser using AI agents for client communication, suitability analysis, order routing, or research, the regulatory expectations are clear and increasingly examined.

NIS2 and AI in Critical Infrastructure: Incident Reporting, Supply Chain Security, and Personal Liability

· 5 min read

NIS2 (Network and Information Security Directive 2) became applicable across EU member states in October 2024. It significantly expands the scope of its predecessor: where NIS1 covered a relatively narrow set of critical infrastructure operators, NIS2 covers essential entities and important entities across 18 sectors including energy, transport, banking, financial market infrastructure, health, drinking water, digital infrastructure, ICT service management, public administration, and space.

If your organization operates in any of these sectors in the EU and uses AI agents, NIS2 requirements apply to those AI systems as part of your overall cybersecurity obligations.

Compliance Framework Monitoring: Keeping Your AI Agent Policy Current as Regulations Change

· 5 min read

Compliance is not a one-time event. Regulations get amended. Enforcement guidance clarifies what the law actually means in practice. Technical standards get updated. Courts issue rulings that change how rules are interpreted. Regulatory deadlines pass and new ones appear.

An AI agent manifest written in January 2026 may need to be updated by December 2026 because one of its frameworks changed. The question is whether you find out proactively — before a regulator does — or reactively.
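The proactive version can be as simple as pinning framework versions in the manifest and diffing them against what's current. The sketch below assumes an invented registry feed and version labels, purely for illustration:

```python
# Sketch of framework-drift detection: compare the framework versions a
# manifest was written against with what a (hypothetical) registry reports
# today. All names and version labels here are illustrative.
manifest_pins = {"SOC2": "2017-TSC", "NIS2": "2023-01"}
registry_current = {"SOC2": "2017-TSC", "NIS2": "2024-10"}  # e.g. after an amendment

def stale_frameworks(pins, current):
    """Frameworks whose current version no longer matches the manifest's pin."""
    return sorted(fw for fw, v in pins.items() if current.get(fw, v) != v)

print(stale_frameworks(manifest_pins, registry_current))  # ['NIS2']
```

Run this on a schedule and you find out about the change before a regulator does.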