Information security has a heavy detection bias. The dominant tools of the trade are SIEMs, log aggregators, anomaly detectors, threat-hunting platforms, and incident-response playbooks. The assumption underneath all of them is that the event will happen, you will see it shortly after, and you will do something about it. For most of what classical infrastructure does, that assumption is reasonable. You can revoke a session. You can rotate a credential. You can disable a user. You can clean a compromised host.
AI systems break the assumption. When an AI agent commits a payment, the money is gone. When it exports a customer table to an external address, the data is out. When it switches a grid segment, the physical action has already propagated. When it writes to an EHR, the record is in the chart and consumed by downstream systems within seconds. Detection is a notification for these events, not a control.
Pre-execution security is the inversion: the control lives before the event, not after. Whether the event happens at all is the decision under review. That inversion is simple to describe and hard to build, because it requires every irreversible action path to be gated by an explicit decision. This post walks through the time-axis framing, the classes of action where the inversion is not optional, and what the prevention posture looks like in practice. See pre-execution authorization for the category page and runtime authorization for AI agents for the broader model.
The time axis
Put every security control in one of three buckets based on when it runs relative to the event:
- Pre-execution. The control runs before the event. Its output decides whether the event happens.
- At-execution. The control runs alongside the event. It can emit signals, log records, or in some cases block in-flight (a WAF can drop an HTTP request mid-flight).
- Post-execution. The control runs after the event has committed. It detects, investigates, and coordinates response.
Classical enterprise security has heavy investment across all three, with the center of mass at post-execution. That center of mass is correct for the classical model because the blast radius of most events is bounded and recoverable. A user's compromised session can be revoked. A ransomware hit can be contained. A misconfigured firewall can be rolled back.
AI systems reposition the center of mass, because the events they produce are disproportionately irreversible. A post-execution control on an irreversible event is, by construction, the wrong shape. You cannot revoke a payment by detecting it. You can try to recover from it, but recovery and prevention are different disciplines with different cost structures.
What "irreversible" means, concretely
Irreversibility is a property of the action, not of the system. It has three ingredients:
- External commitment. The action hands state to a system you do not fully control. Payment networks, partners, customers, regulators, the physical world.
- Time-asymmetric cost. The cost of the action committing in error is much higher than the cost of it not committing. A payment committed in error costs at least the amount, plus reconciliation, plus relationship cost. A payment that did not commit costs a retry.
- Non-idempotent effect. Replaying the action produces additional cost, not the same cost.
Actions with all three are irreversible in the control-design sense. They include payments, data exports, EHR and records writes, grid operations, procurement orders, customer communications, regulatory filings, and infrastructure mutations.
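The three ingredients can be expressed as a small predicate. This is an illustrative sketch; the names `ActionProfile` and `is_irreversible` are hypothetical, not from any particular library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionProfile:
    """Hypothetical profile of an action for control design."""
    external_commitment: bool   # hands state to a system you do not fully control
    time_asymmetric_cost: bool  # committing in error costs far more than not committing
    non_idempotent: bool        # replaying produces additional cost, not the same cost

def is_irreversible(a: ActionProfile) -> bool:
    """All three ingredients together make an action irreversible
    in the control-design sense."""
    return a.external_commitment and a.time_asymmetric_cost and a.non_idempotent

payment = ActionProfile(True, True, True)
retryable_read = ActionProfile(False, False, False)
# is_irreversible(payment) -> True; is_irreversible(retryable_read) -> False
```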
The reason to enumerate them is that they are the set of actions for which detection is not a control. It is useful. It is necessary. It is not sufficient. For these actions, the control that matters is pre-execution.
Why AI changes the picture
Most traditional enterprise systems do not commit these actions automatically. A human is in the loop somewhere. The loop is the de facto pre-execution control: a person looks at the action, decides it is reasonable, clicks something.
AI agents remove the loop. The whole point of an agent is that it can complete multi-step tasks without a click for every step. That is desirable for efficiency. It is also the reason the old post-execution posture no longer works: the human was the pre-execution control, and the human is gone.
Pre-execution security is what replaces the human. It is not a replacement for the human's judgment; it is a replacement for the human's presence at the actuation boundary. The judgment is encoded in policy. The presence is encoded in the gate.
What pre-execution security looks like in practice
A well-designed pre-execution posture has five characteristics:
The decision is explicit
Every irreversible action produces a structured request that is evaluated before the action commits. The decision is one of PERMIT, DENY, or SILENCE. Only PERMIT allows the action to proceed. There is no fallback path that dispatches the action in the absence of a decision.
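A minimal sketch of the explicit-decision shape, assuming a Python runtime; the `Decision` enum and `dispatch` helper are illustrative names, not a specific SDK:

```python
from enum import Enum

class Decision(Enum):
    PERMIT = "permit"
    DENY = "deny"
    SILENCE = "silence"  # no decision could be reached

def dispatch(action, decision: Decision):
    """Only an explicit PERMIT lets the action proceed.
    There is no fallback path for DENY or SILENCE."""
    if decision is Decision.PERMIT:
        return action()
    raise PermissionError(f"action not dispatched: decision was {decision.value}")
```

Note that SILENCE is not handled as a special case: anything other than PERMIT falls through to the non-dispatch path.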
The decision is conservative by default
When the gate cannot reach its rules, or when it times out, or when the surface is not known to policy, the decision is SILENCE. The action does not dispatch. The posture is "if I do not know whether this should happen, it does not happen." This is the fail-closed property; it is the entire game for pre-execution security.
See fail-closed AI systems for the category page on this property. A system that fails open on any axis - unknown surface, timeout, unreachable policy, malformed input - has re-opened the post-execution-only posture for the subset of requests where that axis is hit.
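A fail-closed evaluation wrapper might look like the following sketch. All names here are hypothetical; the point is that an unknown surface, a timeout, and an evaluator failure all collapse to SILENCE rather than to dispatch:

```python
import concurrent.futures

def decide_fail_closed(evaluate, request, known_surfaces, timeout_s=0.005):
    """Return the policy outcome only when evaluation succeeds in time.
    Every failure axis collapses to SILENCE (the action does not dispatch)."""
    if request.get("surface") not in known_surfaces:
        return "SILENCE"  # surface not known to policy
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(evaluate, request)
        try:
            return future.result(timeout=timeout_s)
        except Exception:
            return "SILENCE"  # timeout, unreachable rules, malformed input
```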
The decision is reproducible
Same inputs, same policy version, same outcome, every time. This property lets the decision be audited, lets it be replayed, and lets it be defended in front of a regulator or an incident review board. Non-reproducible decisions are not pre-execution security; they are pre-execution guesswork.
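Reproducibility falls out of making the decision a pure function of the request and a pinned policy version. A sketch, with an illustrative fingerprint scheme for replay and audit (the policy shape and field names are assumptions):

```python
import hashlib
import json

def decide(request: dict, policy: dict) -> str:
    """Deterministic: a pure function of the request and the pinned policy.
    Same inputs, same policy version, same outcome, every time."""
    limit = policy["rules"].get(request["surface"], {}).get("max_amount", 0)
    return "PERMIT" if request["amount"] <= limit else "DENY"

def decision_fingerprint(request: dict, policy: dict, outcome: str) -> str:
    """A stable hash over inputs, policy version, and outcome makes the
    decision replayable and defensible in an audit (illustrative scheme)."""
    blob = json.dumps({"req": request, "policy_version": policy["version"],
                       "outcome": outcome}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()
```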
Evidence is produced, not discovered
Every decision produces a signed receipt bound to the policy version. The receipt is appended to an immutable log. The log is the source of truth for what happened, not application logs, not metrics, not post-hoc interpretation. Evidence is a designed output of the system, not something you reconstruct after an incident.
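A minimal sketch of receipts as a designed output, using HMAC signing over the decision body bound to the policy version. The key handling and field names are illustrative, not a production design:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"example-key"  # illustrative; real systems use managed keys

def make_receipt(request, outcome, policy_version):
    """Every decision produces a signed receipt bound to the policy version."""
    body = {"request": request, "outcome": outcome,
            "policy_version": policy_version, "ts": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def append_receipt(log: list, receipt) -> None:
    """Append-only: receipts are added, never rewritten in place."""
    log.append(receipt)

def verify(receipt) -> bool:
    """Evidence is checked against the signature, not reconstructed post hoc."""
    payload = json.dumps(receipt["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)
```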
The coupling between decision and action is structural
The tool-call SDK in the agent runtime treats the decision as binding. Without PERMIT, no dispatch. No fallback. No retry with a different path. This is what makes the control real instead of advisory. Advisory controls in a pre-execution context are indistinguishable from no control; the agent can always find a path around them if the coupling is voluntary.
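Structural coupling means the dispatch path itself refuses to run without PERMIT. A hypothetical tool wrapper makes the shape concrete (`GatedTool` and `AuthorizationRequired` are illustrative names):

```python
class AuthorizationRequired(Exception):
    pass

class GatedTool:
    """Hypothetical tool wrapper: the gate's decision is binding.
    Without PERMIT there is no dispatch, no fallback, no alternate path."""
    def __init__(self, name, fn, gate):
        self._name, self._fn, self._gate = name, fn, gate

    def __call__(self, **kwargs):
        decision = self._gate(self._name, kwargs)
        if decision != "PERMIT":
            raise AuthorizationRequired(
                f"{self._name}: decision was {decision}; action not dispatched")
        return self._fn(**kwargs)
```

Because the gate call sits inside `__call__`, the agent never holds an ungated reference to the underlying function; the coupling is structural, not voluntary.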
The cases where this matters most
Not every AI system needs pre-execution security for every action. The actions that need it are specific and identifiable in advance.
Money movement
Payments, transfers, reimbursements, disbursements, settlements. The cost of a wrongly committed transfer is at least the amount plus reconciliation plus relationship cost. Detection is insufficient by definition.
Data exfiltration paths
Exports, API calls that return customer tables, reports that include personal data, communications that embed regulated fields. Once the data is out, it is out. Detection tells you after the fact; that is not a control on the action.
Regulated records
EHR writes, clinical decision support commits, patient communications, financial filings, regulatory disclosures. Reversal requires a deliberate, documented correction - expensive by design - and the correction does not undo the reads that downstream systems have already made.
Infrastructure mutation
Configuration changes, permission grants, resource lifecycle operations, policy bundle updates in security-sensitive systems. Reverting a mutation is possible in principle; actually recovering from one is often a multi-person incident.
Physical control
Grid operations, industrial control, robotics actuation, autonomous vehicles. The physical world does not accept rollbacks. Detection is a necessary source of post-hoc analysis, but the control on the action itself has to be pre-execution.
The argument against "just monitor it"
There is a recurring design suggestion that starts with "if we can detect the bad action fast enough, we can contain it." For irreversible actions, this argument is structurally unsound regardless of detection speed. Even a millisecond of detection latency is a millisecond in which the payment committed, the data left, the grid switched. The speed is a consolation; it is not a fix.
The second version of the argument says "we can automate the response to the detection." If the response is going to commit an equal and opposite action, it has the same irreversibility properties as the original action and the same need for a pre-execution control on the response. You do not save anything; you just move the problem one hop downstream.
The third version says "we can add human approval between detection and response." That is a pre-execution control on the response, which is a reasonable structure, but it does not help with the original event. The original event already committed.
The honest conclusion is that post-execution controls are necessary - you always need a detect-and-respond pipeline - but for irreversible actions they are additive to, not a substitute for, pre-execution authorization.
What about speed?
A common concern is that pre-execution security slows everything down. Two observations flip the concern.
First, the latency is small in absolute terms. A local authorization gate returns decisions in low single-digit milliseconds. Even a remote gate returns in the tens of milliseconds. Compare that to the time the agent is already spending on model generation, retrieval, tool execution, and network round trips, and the marginal cost is negligible.
Second, the comparison is not "zero latency and no control" versus "small latency and a control." The comparison is "small pre-execution latency" versus "detection latency plus incident response time plus recovery cost." Post-execution is not free; it is deferred cost. Pre-execution prices the cost up front, on the fast path, as a few milliseconds per decision.
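A back-of-envelope comparison makes the asymmetry visible. Every number below is an assumption chosen for illustration, not a benchmark:

```python
# Illustrative expected-cost comparison per action (all figures are assumptions).
p_bad = 1e-4                # fraction of actions that would commit wrongly
recovery_cost = 50_000.0    # dollars per incident: detect, respond, recover
gate_latency_s = 0.005      # local gate decision, low single-digit milliseconds
latency_value = 0.0001      # dollar cost assigned to one second of added latency

# Pre-execution prices the cost up front, on every action.
pre_execution_cost = gate_latency_s * latency_value

# Post-execution defers the cost to the incidents that commit.
post_execution_cost = p_bad * recovery_cost
```

Under these assumptions the deferred cost per action is several orders of magnitude larger than the gate latency cost; the conclusion is robust to wide changes in the inputs.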
For the cases where this calculation gets interrogated the most - payments, autonomy, grid - the answer is the same every time. See ai-execution-governance for the latency budget analysis across surface classes.
Frequently asked questions
Does pre-execution security replace detection?
No. Detection remains valuable: for reversible actions, for low-stakes actions, for observability, for catching the small number of events that get through any control. The inversion is just about where the center of mass lives for irreversible AI actions. That center of mass should be pre-execution.
Can a model be trusted to make pre-execution decisions?
No, not as the decision-maker. A model can produce signals that flow into the decision, but the decision itself must be deterministic and reproducible. See deterministic authorization for AI agents for the full argument.
How does this interact with existing SOC workflows?
Pre-execution decisions produce receipts. The receipts become a new source of evidence that feeds the SOC. Outcome distributions by surface become new alerting signals. The SOC is augmented, not replaced. The difference is that the SOC now gets to see controls that fired rather than only incidents that committed.
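For example, receipts can be aggregated into per-surface outcome distributions that feed existing alerting. A sketch, assuming receipts carry `surface` and `outcome` fields (field names are illustrative):

```python
from collections import Counter

def outcome_distribution(receipts):
    """Aggregate decision receipts into per-surface outcome counts.
    A spike in DENY or SILENCE on a surface becomes a SOC alerting signal."""
    counts = {}
    for r in receipts:
        counts.setdefault(r["surface"], Counter())[r["outcome"]] += 1
    return counts

def silence_rate(counts, surface):
    """Fraction of decisions on a surface that failed closed."""
    c = counts.get(surface, Counter())
    total = sum(c.values())
    return c["SILENCE"] / total if total else 0.0
```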
Next step
For the category page see pre-execution authorization. For the deterministic property that underpins defensible pre-execution decisions see deterministic authorization. For the broader architectural picture see runtime authorization for AI agents.
Move the center of mass of AI security from detection to prevention on the actuation boundary.