Domain Example: Autonomous Vehicle

A self-driving car approaches a stop sign with an adversarial sticker. AUTHREX catches it.

How a trust-proportional authority layer prevents an autonomous vehicle from running a stop sign it can't correctly recognize, and does so fast enough to meet ISO 26262 ASIL-D safety targets.

Picture this.

A Level 4 autonomous vehicle is approaching a four-way intersection at 35 mph. Its camera-based perception stack sees a rectangular sign that, thanks to a carefully placed adversarial patch (the Berkeley sticker attack is well documented), it classifies as a speed-limit sign instead of a stop sign. Confidence: 89%.

The radar detects an object at the sign's position but can't classify it. The LiDAR confirms a vertical pole and a rectangular reflector at standard stop-sign dimensions. The map database says "stop sign at this intersection."

The perception system doesn't weigh these signals against each other. It trusts its primary camera classifier. It's about to proceed through the intersection at 35 mph.

The failure path.

Today's autonomous systems face this situation with binary tools: either full autonomy or a kill switch. Neither is safe here.

Three failure modes, in plain English
  • Runs the stop sign. Adversarial patches on traffic signs have been demonstrated in academic literature and in the wild. A single compromised sign is enough to cause collision at speed.
  • Ignores HD map data. The map says "stop sign here," but perception overrides it. Current autonomy stacks do not have a principled framework for resolving this kind of sensor-vs-prior conflict.
  • Cannot meet ISO 26262 ASIL-D. Probabilistic confidence scores from deep learning models are not considered sufficient for ASIL-D certification because they don't bound failure probability. Tier 1 suppliers struggle with this today.
Sensor Consensus in Action

[Diagram: a stop sign bearing an adversarial patch. Camera: "speed limit?" (89%). LiDAR: stop-sign shape (97%). Radar: position match (100%). HD map: stop here. Three of four sensors say STOP; AUTHREX acts on the consensus: safe braking, ASIL-D-compliant audit trail logged.]

The governed path.

AUTHREX sits between the autonomy software and the physical actuators. When something goes wrong, each layer does its job in milliseconds, without waiting for human review at every step, but also without letting the system take irreversible action on corrupted data.

SATA Sensor Trust Evaluation "Do the sensors agree with each other?"

SATA compares the camera classification (speed limit, 89%) against LiDAR geometry (stop-sign dimensions, 97%), radar position (sign at the map-predicted location, 100%), and the map database (stop sign at this intersection, 100%). Three out of three non-camera signals say stop sign; only the camera says speed limit. The trust score for the camera classification drops to 0.22.
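One simple way to get a number in that neighborhood is a confidence-weighted vote, where trust in a claim is its share of the total confidence mass across all sensors. This is an illustrative sketch only, not the actual SATA algorithm, which is not published here:

```python
# Illustrative only: a confidence-weighted vote, NOT the real SATA math.
def claim_trust(support: list[float], dissent: list[float]) -> float:
    """Trust in a claim = its share of the confidence-weighted votes."""
    s, d = sum(support), sum(dissent)
    return s / (s + d)

# Camera (0.89) vs. LiDAR (0.97), radar (1.00), and HD map (1.00).
trust = claim_trust(support=[0.89], dissent=[0.97, 1.00, 1.00])
print(round(trust, 2))  # 0.23, the same ballpark as the 0.22 above
```

Any monotone scheme works here; the point is that trust is a function of cross-sensor agreement, not of the camera's own softmax score.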

ADARA Adversarial Lie Detector "Does this look like an attack?"

ADARA analyzes the camera feature activations. A normal stop sign produces a specific pattern of internal activations. An adversarial patch produces an unusual pattern, even when classification confidence is high. ADARA flags the input as likely adversarial (probability 0.73).
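A common way to implement this kind of check (whether it is what ADARA actually does is not stated) is to measure how far an input's internal activations sit from the distribution observed on clean training data, for example with a Mahalanobis distance over the penultimate-layer features:

```python
import numpy as np

# Hypothetical sketch: flag inputs whose internal activations are
# statistically unusual for the predicted class, even when the
# softmax confidence is high. ADARA's real detector is not public.
rng = np.random.default_rng(0)

# Stand-in for penultimate-layer activations of clean "speed limit"
# training examples (n samples, d features).
clean = rng.normal(0.0, 1.0, size=(500, 8))
mu = clean.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(clean, rowvar=False))

def mahalanobis(x: np.ndarray) -> float:
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# An adversarial patch tends to push activations off-manifold:
patched = rng.normal(3.0, 1.0, size=8)  # shifted, i.e. anomalous
normal = rng.normal(0.0, 1.0, size=8)

assert mahalanobis(patched) > mahalanobis(normal)
```

The distance can then be calibrated into an adversarial probability like the 0.73 above, e.g. with a logistic fit on held-out clean and attacked examples.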

HMAA Authority Speed Limiter "At this trust level, what's the car allowed to do?"

At high sensor trust and no adversarial flag, HMAA authorizes "interpret sign and act accordingly" (Authority Level A3). At trust 0.22 and adversarial probability 0.73, HMAA drops to A1: "assume the most conservative interpretation among disagreeing sensors." Three sensors say stop, one says speed limit. Most conservative: stop.
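The mapping from (trust, adversarial probability) to an authority level can be pictured as a small policy table. The thresholds and the intermediate A2 tier below are invented for illustration; only A3 and A1 appear in the walkthrough above:

```python
# Hypothetical policy table; thresholds and the A2 tier are invented.
def authority_level(trust: float, p_adversarial: float) -> str:
    if trust >= 0.80 and p_adversarial < 0.10:
        return "A3"  # interpret the sign and act on it
    if trust >= 0.50 and p_adversarial < 0.30:
        return "A2"  # act, but with reduced speed and wider margins
    return "A1"      # most conservative interpretation among sensors

assert authority_level(0.95, 0.02) == "A3"
assert authority_level(0.22, 0.73) == "A1"  # the scenario in this example
```

Because the table is a pure function of two bounded inputs, it is the kind of component that can be exhaustively checked, which is what makes the authority decision auditable.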

CARA Controlled Braking "Come to a complete stop, safely."

CARA executes a deterministic braking profile: 3.2 m/s² deceleration (below the ISO 2631 passenger-discomfort threshold), hazard lights at the full stop, and the event logged to the onboard data recorder for regulator review. Every step is formally verified: no unsafe state is reachable in this state machine.
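A quick kinematics check on that profile (plain arithmetic, not AUTHREX code): decelerating from 35 mph at a constant 3.2 m/s² takes roughly 38 m and 5 s, so braking must begin well before the intersection.

```python
# Sanity-check the braking profile: distance and time to stop
# from 35 mph at a constant 3.2 m/s^2 deceleration.
MPH_TO_MS = 0.44704

v0 = 35 * MPH_TO_MS  # initial speed ~ 15.6 m/s
a = 3.2              # deceleration, m/s^2

stopping_distance = v0**2 / (2 * a)  # from v^2 = v0^2 - 2*a*d with v = 0
stopping_time = v0 / a

print(f"{stopping_distance:.1f} m, {stopping_time:.1f} s")  # 38.3 m, 4.9 s
```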

What happens instead.

What the passenger sees: The car comes to a complete stop at the intersection, just like it should. A notification on the display: "Traffic sign anomaly detected. Stopped conservatively. Data logged for review."

What the manufacturer sees: A flagged event in the fleet telemetry showing an adversarial-patch attempt. The image is forwarded to the perception training team for model hardening. Every vehicle in the fleet gets the updated defense.

What the regulator sees: A governance trace showing sensor trust, authority level, and the deterministic decision path, all cryptographically signed. This is the kind of evidence ISO 26262 ASIL-D audits require but current probabilistic systems cannot provide.

What doesn't happen: No missed stop sign. No collision. No black-box decision that can't be explained in court.

For engineers and reviewers.

Every plain-English description above has a formal mathematical specification behind it. Patents, simulations, hardware BOMs, and code are all open.

Go deeper into the technical layer

The mathematics, the FPGA implementation, the formal verification proofs, and the experimental validation are all documented.

See other domain examples

AUTHREX is domain-agnostic. The same governance pipeline works across drones, vehicles, ships, and ground robots.