AI Agents Are Running in Your Business. Here Is What Governing Them Actually Looks Like. 

Most organizations deploying AI agents have thought carefully about what those agents are supposed to do. Fewer have thought carefully about what those agents are capable of doing. That gap is where governance risk lives. 

A working paper released in April 2026 by researchers affiliated with CEN/CENELEC JTC 21 put a specific conclusion on record: your regulatory obligations are determined not by what is inside an agent, but by what it does in deployment. An AI agent that summarizes internal meeting notes triggers a narrow set of transparency obligations. The same agent, given access to a hiring system, activates a completely different tier of EU AI Act requirements. The difference is not the agent’s architecture. It is the agent’s footprint. 

ISO 42001, the international standard for AI management systems, provides the right organizational framework for governing that footprint. The six disciplines below are where that framework meets practical business operation. 

Six governance disciplines for agentic AI 

These six disciplines are not aspirational. They are the minimum operational posture for an organization that is deploying AI agents and intends to govern them responsibly. ISO 42001 provides the management system framework that holds them together in an auditable, certifiable structure. 

Discipline 1: Know your agent’s footprint, not just its function 

Every AI agent has two profiles. The first is what it was designed to do. The second is what it can actually do: every system it can access, every action it can take, every person affected by those actions. Governing an agent means knowing both profiles and confirming they match. 

This is not a technical exercise. It is an accountability exercise. The same discipline you apply to documenting a vendor relationship or a new employee’s system access rights applies here. Before an agent is deployed, your organization should be able to produce a clear inventory: which external systems it connects to, what it can read, what it can write, what it can send, and who those actions affect.
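
To make that inventory concrete, here is a minimal sketch of what a footprint record can look like. The field names and the example agent are illustrative assumptions, not a prescribed schema; the point is that every deployed agent has a documented, reviewable answer to "what can it reach, and who does that affect?"

```python
from dataclasses import dataclass

@dataclass
class AgentFootprint:
    agent_name: str
    connected_systems: list[str]   # every external system the agent can reach
    can_read: list[str]            # data the agent can read
    can_write: list[str]           # data or records the agent can modify
    can_send: list[str]            # outbound channels (email, tickets, API calls)
    affected_persons: list[str]    # groups whose outcomes the agent can touch
    reviewed_by: str = ""          # who signed off on this inventory
    reviewed_on: str = ""          # when it was last confirmed against reality

# Illustrative example, not a real deployment.
support_agent = AgentFootprint(
    agent_name="support-assistant",
    connected_systems=["crm", "email-gateway"],
    can_read=["customer-tickets", "order-history"],
    can_write=["ticket-status"],
    can_send=["customer-email"],
    affected_persons=["customers", "support staff"],
    reviewed_by="governance-lead",
    reviewed_on="2026-04-01",
)
```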

Business analogy: You would not onboard a new employee and give them a master key to every system in the building because their job description did not explicitly forbid it. An AI agent’s access should be documented with the same deliberateness you apply to employee onboarding. 

Governance question: Can your organization produce a complete access inventory for every deployed agent today? If not, that is your starting point. 

Discipline 2: Build fences, not rules 

There is a critical difference between telling an AI agent not to do something and technically preventing it from doing that thing. Instructions can be overridden, misinterpreted, or circumvented by an unusual input. Technical constraints cannot. 

For any action your agent is not authorized to take, it should lack the technical ability to take it, not merely the instruction. A customer service agent that is not authorized to issue refunds above a certain threshold should have that limit enforced by the system it connects to. A recruiting agent that is not authorized to reject applications should not have access to the rejection function at all. 
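 
As a minimal sketch of the difference, consider a hypothetical refund tool. The threshold and function names are illustrative assumptions; the point is that the limit is enforced by the system the agent calls, so no prompt, instruction, or unusual input can push the agent past it.

```python
REFUND_LIMIT_EUR = 100.00   # authorized ceiling, set by the business, not the agent

class RefundNotPermitted(Exception):
    """Raised when a requested refund exceeds the agent's authorization."""

def issue_refund(order_id: str, amount_eur: float, requested_by: str) -> str:
    # The check lives in the tool, outside the agent's reasoning. Even if the
    # agent "decides" to refund more, the call fails and can be logged for review.
    if requested_by.startswith("agent:") and amount_eur > REFUND_LIMIT_EUR:
        raise RefundNotPermitted(
            f"{requested_by} requested {amount_eur:.2f} EUR on {order_id}; "
            f"limit is {REFUND_LIMIT_EUR:.2f} EUR. Escalate to a human."
        )
    # ... payment-provider call would go here ...
    return f"refund of {amount_eur:.2f} EUR issued for {order_id}"
```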

Business analogy: A rule telling an employee not to access the payroll system means very little if their computer has the login credentials. Removing the credentials is a different category of control entirely. 

Governance question: For every action your agents are not authorized to take, is that enforced by a technical constraint or an instruction? The answer determines your actual risk exposure. 

Discipline 3: Treat agent updates like product launches, not software patches 

AI agents change: new tools get added, new data sources get connected, and the underlying model gets updated. Each of these changes can alter the agent’s regulatory profile, its risk tier, and the controls required to manage it responsibly. Without a deliberate process for classifying those changes, capability growth accumulates without oversight. 

The governance discipline here is a pre-agreed classification system. Some changes are minor, like a wording update that does not affect what the agent can do. Some changes are material, like adding a new external system the agent can act on, or connecting to a new data source it did not previously access. Material changes require fresh review before deployment. The business value is the ability to demonstrate, at any audit or enforcement inquiry, that governance kept pace with deployment. 
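
A minimal sketch of such a pre-agreed classification, assuming a simple two-tier policy (minor versus material). The triggering conditions are illustrative; the governance decision is which kinds of change force a fresh review before the updated agent ships.

```python
from enum import Enum

class ChangeClass(Enum):
    MINOR = "minor"          # e.g. a wording update, no new capability
    MATERIAL = "material"    # expands what the agent can do or touch

def classify_change(adds_external_system: bool,
                    adds_data_source: bool,
                    changes_underlying_model: bool) -> ChangeClass:
    if adds_external_system or adds_data_source or changes_underlying_model:
        return ChangeClass.MATERIAL
    return ChangeClass.MINOR

def may_deploy(change: ChangeClass, review_signed_off: bool) -> bool:
    # Material changes cannot ship without a completed governance review.
    return change is ChangeClass.MINOR or review_signed_off
```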

Business analogy: When a software team updates a customer-facing application, it goes through testing and sign-off before it is released. An agent update that expands what that agent can do deserves the same discipline as any other change that affects customers or business processes. 

Governance question: Who in your organization decides whether an agent update requires a governance review? If there is no clear answer, that is a gap. 

Discipline 4: Give your agents a performance review 

Every employee is measured against expected performance. An AI agent should be no different. The question is not whether to monitor agents. It is whether your organization has defined what normal looks like, so departures from it are visible. 

This starts with a baseline. How often does the agent act? What kinds of actions does it typically take? What proportion of those actions involves external communications, data reads, or consequential outputs? When does a pattern shift enough that a human should review it? Organizations that operate agents without baselines have no mechanism for detecting behavioral drift, which is the condition the EU AI Act’s essential requirements are designed to prevent. You do not need sophisticated tooling to start. You need a decision about what you are going to measure and what threshold warrants human attention. 
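
Here is a minimal sketch of baseline-and-threshold monitoring. The data and the three-standard-deviation trigger are illustrative assumptions; the point is that "normal" is defined in advance, so a shift is routed to a human rather than discovered after the fact.

```python
from statistics import mean, stdev

# Observed history: how many actions the agent took per day.
daily_action_counts = [42, 38, 45, 40, 44, 39, 41]

baseline = mean(daily_action_counts)
spread = stdev(daily_action_counts)
threshold = baseline + 3 * spread   # pre-agreed trigger for human review

def needs_human_review(todays_count: int) -> bool:
    return todays_count > threshold

print(needs_human_review(43))    # False: within the expected range
print(needs_human_review(120))   # True: route to a reviewer
```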

Business analogy: A financial controller reviewing monthly expenditures is not looking for fraud on every line. They are looking for patterns that deviate from the expected range. Agent monitoring works on the same principle. Normal must be defined before abnormal can be recognized. 

Governance question: If one of your agents started behaving differently today, who would notice, and how quickly? 

Discipline 5: Have a response plan before you need one 

When an agent’s behavior crosses a defined threshold, what happens? Who has authority to suspend it? Who reviews what it did? What is the process for determining whether the behavior was a one-time event or a systemic change? What does re-approval look like before the agent returns to operation? 
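
A minimal sketch of what answering those questions can produce, assuming a simple suspend, review, re-approve flow. The field names and the suspension hook are illustrative assumptions; the value is having the sequence agreed and rehearsed before it is needed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentIncident:
    agent_name: str
    trigger: str                      # which threshold or event was crossed
    suspended_at: str
    actions_under_review: list[str]   # what the agent did, for reconstruction
    systemic: bool | None = None      # one-time event or systemic change?
    reapproved_by: str | None = None  # sign-off required before resuming

def open_incident(agent_name: str, trigger: str, recent_actions: list[str]) -> AgentIncident:
    # Step 1: the person with suspension authority takes the agent offline.
    # suspend_agent(agent_name)  # hypothetical hook into your orchestration layer
    # Step 2: the review starts from a record of what the agent actually did.
    return AgentIncident(
        agent_name=agent_name,
        trigger=trigger,
        suspended_at=datetime.now(timezone.utc).isoformat(),
        actions_under_review=recent_actions,
    )
```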

Organizations that work through these questions in advance are applying the same operational discipline that exists for every other business continuity scenario. The response plan exists for the same reason a financial escalation policy exists. Not because the scenario is expected, but because the moment you need it is not the moment you want to be designing it. The EU AI Act requires corrective action procedures for high-risk AI systems. The more important outcome, though, is organizational readiness. 

Business analogy: A fire evacuation plan is not evidence of pessimism about fire risk. It is evidence of operational maturity. An AI agent response plan sits in the same category of governance infrastructure. 

Governance question: If an agent produced an output tomorrow that caused customer harm, could your organization reconstruct what it did and why? If not, your response capability is not yet ready. 

Discipline 6: Know what version of your agent is running 

At any given moment, can your organization say with confidence what capabilities your deployed agents have, what data they can access, and what guardrails are in place? Most organizations can answer this for their core software systems. Fewer can answer it for their agents, particularly as those agents evolve through updates and capability additions. 

The governance discipline here is version accountability. When the agent changes, the change is recorded and the current version is traceable. This is not a technical formality. It is the foundation of any audit response. If a regulator, a customer, or a board member asks what a specific agent was capable of doing on a specific date, the answer needs to be retrievable. Organizations that cannot produce that answer are carrying exposure that documentation would close at low cost. 
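
A minimal sketch of version accountability: an append-only record of what each agent version could do and when it was active. The structure is an illustrative assumption; any change log that can answer "what was this agent capable of on this date?" serves the same purpose.

```python
version_history = [
    {
        "agent": "support-assistant",
        "version": "1.2.0",
        "active_from": "2026-01-10",
        "active_until": "2026-03-02",
        "capabilities": ["read tickets", "update ticket status"],
        "guardrails": ["no outbound email", "read-only CRM access"],
    },
    {
        "agent": "support-assistant",
        "version": "1.3.0",
        "active_from": "2026-03-02",
        "active_until": None,   # currently deployed
        "capabilities": ["read tickets", "update ticket status", "send customer email"],
        "guardrails": ["refund limit enforced by payment system"],
    },
]

def capabilities_on(agent: str, date: str) -> list[str]:
    # Answer the audit question: what could this agent do on this date?
    for record in version_history:
        if (record["agent"] == agent
                and record["active_from"] <= date
                and (record["active_until"] is None or date < record["active_until"])):
            return record["capabilities"]
    return []

print(capabilities_on("support-assistant", "2026-02-15"))
```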

Business analogy: A manufacturing company can tell you exactly what specifications any product on the floor was built to. A financial firm can tell you what trading rules were active on any given date. AI governance requires the same baseline accountability for your agents. 

Governance question: Can your organization demonstrate, for any deployed agent, what it was capable of at any point in the past six months? 

What good looks like at each stage 

Governance maturity in this area develops in stages. Few organizations arrive at all six disciplines simultaneously. The practical question is where you are starting from and what the next step looks like. 

Stage 1: Aware. You can name your deployed agents and describe their general function. The next step is to document the footprint inventory for each agent: external systems, data access, and affected persons. 

Stage 2: Documented. Access inventories and a change classification policy are in place. The next step is to define behavioral baselines and thresholds that trigger human review when crossed. 

Stage 3: Monitored. Baselines are active and threshold breaches are routed to a human reviewer. The next step is to build and test the response plan and establish version accountability for every deployed agent. 

Stage 4: Certifiable. All six disciplines are operating and documented within an ISO 42001 AIMS. The organization can demonstrate governance posture to a regulator, auditor, or customer at any point. 

The case against waiting 

The most common reason organizations delay agentic AI governance work is that the formal standards are not yet finalized. The EU AI Act harmonized standards are still in development. That fact is accurate, but the conclusion drawn from it is wrong. 

The EU AI Act’s requirements for high-risk AI systems will be enforceable by December 2027. Standards provide a path to demonstrating compliance. They do not create obligations. Every month of delay is a month of compliance debt accumulating on a timeline that has already started. 

The governance disciplines described here do not require finalized standards. They require decisions, documentation, and organizational commitment. All three are available today. 

The ISO 42001 connection 

Each of the six disciplines above maps to a specific clause in ISO 42001. The footprint inventory lives in the scope statement under Clause 4. Guardrail enforcement and change classification live in the operational controls under Clause 8. Behavioral monitoring lives in performance evaluation under Clause 9. The response plan lives in corrective action under Clause 10. Version accountability is held by the documented information requirements under Clause 7. 

ISO 42001 is not a constraint on agentic AI deployment. It is the management system that makes deployment defensible. Organizations already certified against ISO 42001 have the structural foundation in place. What most need is a deliberate extension of that foundation to cover the specific characteristics of agentic systems: their runtime behavior, their dynamic capability footprints, and their multi-system action chains. 

Organizations that have not yet begun ISO 42001 implementation have an opportunity to build that foundation with agentic AI governance built in from the start, rather than retrofitted after the fact. 

Where does your organization stand? 

Agentic AI governance is not a future problem. It is a current one. The organizations building that foundation now will be the ones that can demonstrate it when asked by a regulator, an auditor, or a customer. A-LIGN works with organizations at every governance maturity stage, from initial readiness to full ISO 42001 certification assessment. Reach out today to find out where your organization stands.