Kernel FSM, in-process decision (no HTTP, no crypto)
trigguard.sdk.gate.check() · trigguard-system/trigguard-kernel
| Variant | Iterations | p50 | p95 | p99 | p99.9 | max | Throughput |
|---|---|---|---|---|---|---|---|
| Default request | 100,000 | 0.011 ms | 0.013 ms | 0.018 ms | 0.049 ms | 0.214 ms | 85,343 / s |
| Tier-1 irreversible + 2 signals (spend) | 100,000 | 0.007 ms | 0.008 ms | 0.011 ms | 0.038 ms | 0.141 ms | 139,716 / s |
Single-threaded, after a 1,000-iteration warmup. The DecisionEngine path is exercised on every call (telemetry confirms 101,000 / 101,000 gates evaluated, i.e. warmup plus measured iterations).
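The p50/p95/p99 columns above are plain order statistics over the per-call samples. A minimal sketch of such a measurement loop, assuming a synchronous `gateCheck` stand-in (the real harness is `scripts/bench/kernel_latency_bench.py`; `trigguard.sdk.gate.check()` is not reproduced here):

```javascript
// Minimal latency-bench sketch: warm up, sample each call with a monotonic
// clock, then read percentiles as order statistics over the sorted samples.
function bench(gateCheck, { count = 100_000, warmup = 1_000 } = {}) {
  for (let i = 0; i < warmup; i++) gateCheck(); // warm caches / JIT first
  const samples = new Float64Array(count);
  for (let i = 0; i < count; i++) {
    const t0 = process.hrtime.bigint();
    gateCheck();
    samples[i] = Number(process.hrtime.bigint() - t0) / 1e6; // ns -> ms
  }
  samples.sort(); // TypedArray sort is numeric ascending
  const pct = (p) => samples[Math.min(count - 1, Math.floor((p / 100) * count))];
  const totalMs = samples.reduce((a, b) => a + b, 0);
  return {
    p50: pct(50), p95: pct(95), p99: pct(99), p999: pct(99.9),
    max: samples[count - 1],
    throughputPerSec: count / (totalMs / 1000), // single-threaded rate
  };
}
```

Throughput here is the single-threaded rate implied by the summed sample time, which is why it is close to, but not exactly, `1 / p50`.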
Decision core over HTTP (Swift canonical core)
POST /decide · TrigGuard/remote-eval-core
| Variant | Iterations | p50 | p95 | p99 | p99.9 | max | Throughput |
|---|---|---|---|---|---|---|---|
| Client wall-clock (hrtime) | 20,000 | 0.168 ms | 0.260 ms | 0.474 ms | 1.328 ms | 8.710 ms | 5,285 / s |
Isolates the Swift FSM + HTTP framing; no Node evaluator in the hot path.
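"Client wall-clock (hrtime)" means a monotonic `process.hrtime.bigint()` timestamp taken around each request on the client, so every sample includes HTTP framing and loopback transit, not just server-side time. A hedged sketch of one such sample (the URL is a placeholder; the real driver is `scripts/bench/pipeline_latency_bench.js`):

```javascript
// One wall-clock sample of a POST round trip, measured with a monotonic
// clock so NTP adjustments cannot skew the result. The body is drained so
// the sample covers the full exchange, not just the response headers.
async function timedPost(url, body) {
  const t0 = process.hrtime.bigint();
  const res = await fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  await res.arrayBuffer(); // drain the response body
  return Number(process.hrtime.bigint() - t0) / 1e6; // elapsed ms
}
```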
End-to-end evaluator pipeline
POST /v1/evaluate · Node evaluator → Swift canonical core → signed receipt
| Scenario | Iterations | p50 | p95 | p99 | p99.9 | max | Throughput |
|---|---|---|---|---|---|---|---|
| Sequential (1 client, no concurrency) | 5,000 | 0.485 ms | 0.786 ms | 1.279 ms | 1.973 ms | 3.460 ms | 1,868 / s |
Full production path: canonicalization, idempotency, auth check, protocol fingerprint, receipt hashing.
Concurrent load test (k6)
VU-based sustained concurrency · error rate = 0%
| Virtual users | Duration | Total requests | RPS | p50 | p95 | p99 | p99.9 | Errors |
|---|---|---|---|---|---|---|---|---|
| 10 VUs | 30 s | 120,734 | 4,024 / s | 2.26 ms | 3.60 ms | 4.59 ms | 13.63 ms | 0.00% |
| 50 VUs | 60 s | 249,407 | 4,156 / s | 11.65 ms | 14.78 ms | 21.33 ms | 28.77 ms | 0.00% |
| 100 VUs | 30 s | 127,889 | 4,261 / s | 22.71 ms | 28.63 ms | 35.12 ms | 42.46 ms | 0.00% |
Single-instance Node evaluator + Swift core on one machine; the instance saturates at roughly 4,200 RPS. Production deployments use horizontal autoscaling to keep per-instance concurrency well below that point.
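The table is consistent with a saturated single instance: by Little's law, in-flight concurrency ≈ RPS × median latency, so once throughput flattens near 4,200 RPS, adding VUs mostly adds queueing delay rather than requests per second. A quick sanity check against the rows above:

```javascript
// Little's law: L = lambda * W. With RPS flat at saturation, implied
// in-flight concurrency should track the VU count as latency grows.
function impliedConcurrency(rps, p50Ms) {
  return rps * (p50Ms / 1000);
}

// 50-VU row:  4,156 RPS x 11.65 ms ~= 48 in-flight, close to 50 VUs.
// 100-VU row: 4,261 RPS x 22.71 ms ~= 97 in-flight, close to 100 VUs.
```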
Site-claim reconciliation
How measured reality compares with the published headline numbers
Reproduce these runs
Every number on this page comes from a committed artifact; no hand-edited values.
The full harness lives in the TrigGuard repository. After cloning, run the following on any machine with Node 20+, Python 3.11+, Swift 5.9+, and k6 installed:
```sh
# 1. In-process kernel FSM
cd trigguard-system/trigguard-kernel
PYTHONPATH=. python3 ../../TrigGuard/scripts/bench/kernel_latency_bench.py \
  --count 100000 --warmup 1000 \
  --output ../../TrigGuard/evidence/artifacts/benchmarks/kernel_fsm_latency.json

# 2. Start Swift canonical decision core
cd ../../TrigGuard
TG_DECIDE_PORT=9090 ./scripts/start_canonical_core.sh &

# 3. Measure /decide directly
BASE_URL=http://127.0.0.1:9090 ITERATIONS=20000 WARMUP=1000 \
  OUTPUT=evidence/artifacts/benchmarks/decision_core_http_latency.json \
  node scripts/bench/pipeline_latency_bench.js  # (endpoint switched to /decide)

# 4. Start Node evaluator stub with canonical core URL wired in
cd remote-eval-stub && NODE_ENV=test \
  TRIGGUARD_UNSAFE_LOCAL_AUTH_BYPASS=true PORT=8080 \
  TG_CANONICAL_CORE_URL=http://127.0.0.1:9090/decide \
  BULKHEAD_TENANT_MAX_IN_FLIGHT=2000 BULKHEAD_GLOBAL_MAX_IN_FLIGHT=5000 \
  RATE_LIMIT_PER_TENANT=10000000 RATE_LIMIT_GLOBAL=50000000 \
  node server.js &

# 5. End-to-end sequential pipeline
cd .. && BASE_URL=http://127.0.0.1:8080 TOKEN=test-token \
  ITERATIONS=5000 WARMUP=500 \
  OUTPUT=evidence/artifacts/benchmarks/eval_pipeline_latency.json \
  node scripts/bench/pipeline_latency_bench.js

# 6. Concurrent k6 load test
k6 run --env BASE_URL=http://127.0.0.1:8080 --env VUS=50 --env DURATION=60s \
  --env TOKEN=test-token \
  --summary-export evidence/artifacts/benchmarks/k6_summary_50vu_60s.json \
  load-tests/evaluate_bench.js

# 7. Aggregate
node scripts/aggregate_benchmarks.js
```
Raw artifacts
- benchmark_summary.json, aggregated headline view
- benchmark_dataset.json, machine-readable dataset (mirrors the report below)
- kernel_fsm_latency.json, in-process kernel, default request
- kernel_fsm_latency_signals.json, in-process kernel, signals-heavy
- decision_core_http_latency.json, Swift /decide direct
- eval_pipeline_latency.json, end-to-end pipeline sequential
- k6_summary_10vu_30s.json, 10 VU load test raw k6 summary
- k6_summary_50vu_60s.json, 50 VU load test raw k6 summary
- k6_summary_100vu_30s.json, 100 VU load test raw k6 summary
Documentation
- TRIGGUARD_BENCHMARK_REPORT.md, full report (methodology, layered envelope, reconciliation, reproduction)
- performance_claim_audit.md, file-by-file audit of every public latency claim against the artifacts
