The runtime authorization model for AI agents was developed with software agents in mind: planners that emit tool calls, tool calls that hit external APIs, APIs that move money or data. That world has a millisecond budget for decisions, an assumption of best-effort network availability, and a long tolerance for retries. Autonomous systems - robotics, autonomous vehicles, industrial control, mission systems, medical actuation - do not share those properties. The authorization model has to adapt without losing the discipline.
This post covers what changes for autonomous systems, what does not, and the deployment patterns that survive the additional constraints. It is a companion to runtime authorization for AI agents for the subset of systems that physically move, write, or cut power in the real world.
Three axes that change
Latency budget
A payment API with a 150ms round trip can absorb a 5ms authorization decision without noticing. A motor control loop that runs at 1 kHz cannot absorb a 5ms decision per actuation without destabilizing the loop. A vehicle planner running at 50 Hz has 20ms per tick, and the authorization decision is one piece of that tick, not all of it. Budgets are tight, and "low-latency" has a different meaning than in software-only systems.
The adaptation is not to make the authorization faster by making it sloppier - it is to move authorization to the layer that fits the budget. In a robotics stack, per-actuation decisions are often enforced by a local safety controller with a pre-compiled rule set, while per-mission or per-plan-step decisions go through a richer gate. The discipline is the same; the topology is layered.
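The layered topology can be sketched as a simple router that sends each request to the tier whose latency budget fits it. Everything here is illustrative: the tier names, action kinds, and budget values are assumptions for the sketch, not part of any real stack.

```python
from enum import Enum

class Tier(Enum):
    ACTUATION = "actuation"   # pre-compiled rules in the real-time loop
    PLAN_STEP = "plan_step"   # dynamic gate, evaluated per plan step
    MISSION = "mission"       # richest context, evaluated per mission

# Illustrative latency budgets in seconds (assumed values, not from the post).
BUDGETS = {Tier.ACTUATION: 50e-6, Tier.PLAN_STEP: 20e-3, Tier.MISSION: 5.0}

def route(action_kind: str) -> Tier:
    """Send an authorization request to the tier whose budget fits it."""
    if action_kind in ("motor_command", "joint_motion"):
        return Tier.ACTUATION
    if action_kind in ("trajectory", "pick", "valve_open"):
        return Tier.PLAN_STEP
    return Tier.MISSION
```

The point of the sketch is the shape, not the numbers: the discipline stays uniform while the evaluation mechanism changes per tier.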
Safety envelope
Software agents operate inside a world where every action is, in principle, an API call. There is a well-defined schema, a well-defined surface, a well-defined effect. Autonomous systems operate in a world where every action has a physical consequence that is bounded by a safety envelope - a set of states and motions the system must not leave, regardless of what the planner or operator asks for.
Safety envelopes are typically engineered separately from AI policy. A vehicle's safety envelope is defined by vehicle dynamics, sensor coverage, and operational design domain, not by a machine learning system. An industrial robot's envelope is defined by mechanical limits, tooling constraints, and the workspace geometry. The authorization system is responsible for making sure no action is permitted that would take the system out of envelope; it is not responsible for defining the envelope.
The practical rule: safety envelopes are a hard constraint that the authorization gate consults. They are not derived from the AI's plan and they are not overridable by the AI's reasoning. The envelope is an input to the policy; the policy is part of the decision; the decision is binding on the actuator.
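That ordering can be made concrete in a small sketch: the envelope is a hard conjunct that the gate checks before the policy's answer can matter. The field names and bounds are hypothetical; the structure is the point.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    """Engineered safety envelope: hard bounds, not derived from the planner."""
    max_speed_mps: float
    workspace_x_m: tuple  # (min, max) position bound in metres

def within_envelope(env: Envelope, speed_mps: float, x_m: float) -> bool:
    lo, hi = env.workspace_x_m
    return speed_mps <= env.max_speed_mps and lo <= x_m <= hi

def authorize(env: Envelope, policy_permits: bool,
              speed_mps: float, x_m: float) -> bool:
    # The envelope is an input the policy cannot override: out of envelope
    # means refusal, regardless of what the policy or planner concluded.
    return within_envelope(env, speed_mps, x_m) and policy_permits
```

Note that a permissive policy cannot rescue an out-of-envelope action; the conjunction only ever narrows what is allowed.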
Functional-safety framing
Software authorization can be a standard SRE concern. Autonomous authorization operates under functional-safety standards: ISO 26262 for road vehicles, IEC 61508 for process industries, ISO 13849 for machinery, DO-178C / DO-254 for airborne systems, IEC 62304 for medical device software, and others. The standards define integrity levels (ASIL, SIL, Class) and prescribe processes, verification, and independence properties for the controls that sit in safety-critical paths.
Those frameworks predate AI agents by decades and will outlast any particular AI technology stack. The authorization system for an autonomous deployment has to slot into that framework rather than replace it. That means the authorization discipline has to be:
- traceable to a requirement that is itself traceable to a hazard analysis
- verified against specified fault conditions, not just happy-path tests
- reviewed under the standard's process artifacts (safety case, HARA, FMEA)
- assigned an integrity level that matches the hazards it mitigates
The runtime authorization model fits this framework well, but it has to be presented in the framework's language. A vendor selling "runtime authorization" to an autonomy program without understanding this framing will not be taken seriously by the safety organization.
What does not change
The core discipline stays intact. Three-valued decisions (PERMIT, DENY, SILENCE), deterministic evaluation, policy versioning, and signed receipts are as relevant for autonomous systems as for software agents, often more so. The decisions happen at a different cadence and are enforced by different controllers, but the contract is the same.
Three-valued decisions
The difference between a decision that was DENY (explicit refusal) and one that was SILENCE (no policy permitted it) matters in safety analysis. A silent refusal - "nothing allowed this, so it did not happen" - is the same shape as a fail-closed mechanical interlock: absence of a permit signal is refusal. Safety engineers recognize this pattern; it is how brake interlocks, E-stops, and door locks have worked for a century.
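A minimal sketch of the three-valued evaluation, with SILENCE as the fail-closed default when no rule matches. The rule representation here is an assumption made to keep the example small.

```python
from enum import Enum

class Decision(Enum):
    PERMIT = "permit"
    DENY = "deny"        # an explicit rule refused the action
    SILENCE = "silence"  # no rule permitted it: fail-closed default

def evaluate(rules, request) -> Decision:
    """First matching rule wins; absence of any match is SILENCE, not PERMIT."""
    for matches, effect in rules:
        if matches(request):
            return effect
    # No permit signal means refusal, like a mechanical interlock.
    return Decision.SILENCE
```

The safety-relevant property is that the default branch never returns PERMIT, so an empty or incomplete rule set refuses everything.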
Determinism
Autonomous systems are evaluated under functional safety and regulatory regimes that require reproducibility. "The policy's output at time T given inputs X" must be reconstructible at time T + N for any N in the retention window. That is exactly the determinism property that deterministic authorization for AI agents argues for in the software case, with even less latitude in the autonomous case.
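One way to make that reconstruction property checkable is to bind the decision to a canonical digest of the policy version and inputs; re-evaluating the same policy version on the same inputs at T + N must reproduce the same decision and therefore the same digest. This is a sketch of the idea, not any particular product's format.

```python
import hashlib
import json

def decision_digest(policy_version: str, inputs: dict, decision: str) -> str:
    """Canonical digest of (policy version, inputs, decision).

    Deterministic evaluation implies the digest computed at decision time
    matches the digest recomputed during a later reconstruction.
    """
    canonical = json.dumps(
        {"policy": policy_version, "inputs": inputs, "decision": decision},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Canonical serialization (sorted keys, fixed separators) matters: without it, two byte-different encodings of the same decision would hash differently and break reconstruction.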
Signed receipts
A signed receipt for an actuation decision is the shape of evidence safety organizations have always wanted but rarely had for software decisions. Cryptographic signatures on the decision, bound to the policy version and the request inputs, are stronger than log-based evidence that predominates in certification today. Over time, this is likely to become an expected artifact in safety cases for AI-assisted autonomous systems.
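The receipt shape can be sketched with Python's standard library. A real deployment would use asymmetric signatures (for instance Ed25519) so verifiers do not hold the signing key; HMAC is used here only to keep the example self-contained, and the field names are assumptions.

```python
import hashlib
import hmac
import json

def _payload(body: dict) -> bytes:
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def sign_receipt(key: bytes, policy_version: str,
                 request: dict, decision: str) -> dict:
    """Bind the decision to the policy version and the request inputs."""
    body = {"policy_version": policy_version,
            "request": request, "decision": decision}
    sig = hmac.new(key, _payload(body), hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def verify_receipt(key: bytes, receipt: dict) -> bool:
    """Any change to decision, inputs, or policy version breaks the signature."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    expected = hmac.new(key, _payload(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```

The property a safety case cares about is tamper evidence: a receipt whose decision field was edited after the fact no longer verifies.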
Deployment patterns that hold up
Four patterns recur across robotics, autonomy, and industrial control deployments. Each corresponds to a different integrity or latency tier.
Pattern 1: pre-compiled in-loop safety controller
The innermost control loop - joint motion, motor commands, physical actuation - runs on a real-time controller with a pre-compiled rule set. The rules come from a safety policy. They are verified at compile time against the safety envelope. Runtime evaluation is microseconds to single-digit milliseconds. There is no dynamic policy lookup at this tier.
Runtime authorization at this tier is about the pre-compilation process: the rules that go into the controller are the output of a policy compilation pipeline, signed and versioned, and the policy version is recorded at deployment time. Not every actuation produces a runtime receipt; the receipt is the signed build of the controller firmware.
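The build-time step can be sketched as a compile function that refuses to emit a rule table unless every rule stays inside the envelope, then records a digest as the build receipt. The rule schema and envelope fields here are hypothetical.

```python
import hashlib
import json

def compile_rules(policy_version: str, rules: list, envelope: dict) -> dict:
    """Verify the rule table against the envelope at compile time, not runtime.

    The in-loop controller never does a dynamic policy lookup; it only
    executes rules that this pipeline has already proven in-envelope.
    """
    for rule in rules:
        if rule["max_speed_mps"] > envelope["max_speed_mps"]:
            raise ValueError(f"rule {rule['name']!r} exceeds safety envelope")
    blob = json.dumps({"policy": policy_version, "rules": rules},
                      sort_keys=True).encode()
    # The build receipt stands in for per-actuation runtime receipts.
    return {"policy_version": policy_version,
            "rule_digest": hashlib.sha256(blob).hexdigest()}
```

A failed compile is the in-loop analogue of a DENY: the unsafe rule never reaches the controller at all.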
Pattern 2: plan-step gate
One level up from the actuation loop, the planner emits plan steps - "follow this trajectory," "pick up object at this location," "open this valve." Plan steps are evaluated by a richer gate that has access to more context (mission state, operator identity, sensor confidence, operational design domain). Latency at this tier is milliseconds to tens of milliseconds. Decisions are made per plan step, not per actuation, so the volume is manageable.
This tier uses a conventional runtime authorization gate, producing signed receipts per plan step. Receipts accumulate in the on-vehicle or on-robot receipt store and are offloaded to a central store on the next sync.
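The accumulate-then-offload behavior can be sketched as a small buffer with a sync step; the class and method names are invented for the example.

```python
class ReceiptStore:
    """On-vehicle receipt buffer: accumulate locally, offload on next sync."""

    def __init__(self):
        self._pending = []

    def append(self, receipt: dict) -> None:
        self._pending.append(receipt)

    def sync(self, upload) -> int:
        """Offload pending receipts in order via the upload callable.

        Each receipt is removed only after its upload call returns, so a
        failed sync leaves the remainder buffered for the next attempt.
        """
        sent = 0
        while self._pending:
            upload(self._pending[0])
            self._pending.pop(0)
            sent += 1
        return sent
```

Keeping receipts on-robot until a confirmed upload is what preserves the evidence chain across connectivity gaps.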
Pattern 3: mission-level authorization
Missions - the task the autonomous system has been assigned - are authorized once at assignment time. This is the tier where human-in-the-loop controls live: an operator dispatches a mission, and the mission includes the authorization to execute its plan steps within defined bounds. Mission authorization has the loosest latency budget and the richest context (operator identity, mission parameters, applicable regulations, environmental conditions).
Mission receipts are long-lived artifacts. They are the authorization chain that subsequent plan-step and actuation-level receipts hang off of. A plan step whose mission receipt was revoked is not dispatched, regardless of what the plan-step gate would say in isolation.
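The revocation check can be sketched as a guard that runs before the plan-step gate ever evaluates the step; the record shapes are assumptions for the example.

```python
def dispatchable(plan_step: dict, missions: dict) -> bool:
    """A plan step is dispatched only under a valid, unrevoked mission receipt.

    Revocation at the mission tier overrides the plan-step gate: a step with
    no live mission receipt is refused before the gate is even consulted.
    """
    mission = missions.get(plan_step["mission_id"])
    return mission is not None and not mission.get("revoked", False)
```

This is the chain property stated in code: authorization flows downward from the mission receipt, never upward from an individually permissible step.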
Pattern 4: offline safety case reconstruction
Periodically, and after incidents, the full chain is reconstructed for review. Mission receipt → plan-step receipts → actuation firmware version → signed policy bundles. The chain is self-auditing: every step refers to the step above it, and every reference is signed. This reconstruction is what a safety organization needs for certification evidence and for incident analysis.
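The reconstruction walk can be sketched as following parent references from an actuation record up to its mission receipt, failing loudly if any link is missing. Signature checks per link (as in the receipt sketch earlier in the post) are omitted here to keep the example focused on chain shape; the field names are assumed.

```python
def reconstruct_chain(actuation: dict, receipts: dict) -> list:
    """Walk from an actuation record up to the mission receipt.

    Every receipt names its parent; a missing parent means the chain is
    broken and the safety case cannot cite it.
    """
    chain, node = [], actuation
    while node is not None:
        chain.append(node)
        parent_id = node.get("parent")
        if parent_id is None:
            break  # reached the mission receipt, the root of the chain
        node = receipts.get(parent_id)
        if node is None:
            raise KeyError(f"chain broken: missing receipt {parent_id!r}")
    return chain
```

Completeness is the whole game: the walk either terminates at a mission receipt or raises, and the raised case is exactly the incomplete evidence a reviewer needs surfaced.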
Most of what functional-safety standards want, in the context of AI-assisted autonomy, is exactly this chain. The work is in building it so that it is complete and reproducible. The payoff is that the safety case for the AI component becomes legible in the same way mechanical safety cases have always been legible.
A concrete example
Consider an autonomous warehouse vehicle that picks and moves pallets. Its stack includes a perception system (AI), a planner (partially AI), a controller (non-AI), and a fleet coordinator (hybrid).
- The mission receipt authorizes the vehicle to move a specific pallet from location A to location B during a specific time window, within a specific operational design domain (lit aisles, dry surfaces, during the daytime shift). Signed by the fleet coordinator at assignment.
- The plan-step receipts authorize each leg of the trajectory: "traverse aisle 4," "turn at junction 4B," "lift pallet 77392," "lower pallet at bay 12." Each is signed by the on-vehicle gate after evaluating against the mission receipt, current sensor state, and local policy.
- The actuation policy is baked into the motor controller firmware, compiled from signed source, with the firmware version recorded in the build receipt.
- An incident - the vehicle stops short of a dropped pallet on the floor - is reconstructed from the receipt chain: the mission was valid, the plan step for the final traverse was issued, the perception subsystem identified an obstacle, the gate returned SILENCE on the next plan step, the vehicle halted. No narrative reconstruction is needed; the receipts are the story.
This is what the mature state of autonomous runtime authorization looks like. Every decision is evidenced. Every piece of evidence is chained. Every chain is auditable.
Relationship to existing safety engineering
Autonomous systems already have mature safety engineering. Hazard analysis (HARA, FMEA, FTA), safety cases, and integrity-level assignments are long-established. Runtime authorization does not replace any of this. It slots in as the control layer that enforces what the safety analysis specifies, with evidence that the safety case can cite.
The relationship is:
- Hazard analysis identifies the unsafe states the system must not reach.
- Safety requirements specify the control behaviors that prevent those states.
- Runtime authorization implements those controls at the plan-step and mission level.
- Signed receipts provide the evidence trail the safety case requires.
Teams that treat runtime authorization as a parallel system to the existing safety engineering are doing it wrong. Teams that treat it as an implementation choice for their existing safety requirements are doing it right.
Frequently asked questions
Does this replace our functional-safety work?
No. Runtime authorization sits inside the functional-safety framework, not alongside it. The framework still defines what is safe; runtime authorization is how you enforce it and produce evidence for it.
What about hard real-time loops?
Hard real-time loops use pre-compiled rule sets (Pattern 1). The dynamic gate lives at higher layers with larger latency budgets. Trying to put a dynamic gate inside a 1 kHz motor control loop is a category error.
How does this interact with regulatory regimes like the EU Machinery Regulation or the autonomy-specific frameworks?
Those regimes are evolving specifically to accommodate AI-driven systems. Most of them require evidence of control behavior and traceability. Runtime authorization with signed receipts fits the direction of travel well, but any specific compliance work still has to be done against the specific regime.
Is this relevant to drone and aerospace autonomy?
Yes, with the caveat that airborne systems operate under DO-178C and related standards that have their own strict process requirements. The patterns described here are compatible but need to be mapped to the standard's expectations for software verification.
Next step
For the foundational software-agent model see runtime authorization for AI agents. For industry context see autonomous & industrial and industries. For the broader execution-governance frame see AI execution governance.
Bring runtime authorization into your autonomy safety case at the mission, plan, and actuation tiers.