Domain Example: Autonomous Drone

A reconnaissance drone is about to fire on a target. AUTHREX stops it.

How a trust-proportional authority layer prevents a UAV from acting on GPS-spoofed, jammed, or adversarially corrupted sensor data, without aborting the entire mission.

Picture this.

A Group 3 reconnaissance drone is operating over contested terrain in a coalition theater. It has identified a target matching its ROE profile and its autonomy stack is ready to engage. This is exactly the kind of mission the U.S. DoD is fielding today under DoDD 3000.09.

In the last 15 seconds, three things have happened: (1) GPS signal quality has dropped sharply, suggesting jamming. (2) The onboard IMU says the drone is turning, but GPS says it's flying straight, a classic GPS spoofing signature. (3) The camera-based target identification confidence is 94%, but the radar return pattern doesn't match a vehicle of that type.

The autonomy software doesn't weigh these signals together. It sees a target. It's about to fire.

The failure path.

Today's autonomous systems face this situation with binary tools: either full autonomy or a kill switch. Neither is safe here.

Three failure modes, in plain English
  • Fires on a spoofed decoy. Russia and Iran have both demonstrated camera/radar spoofing against U.S. systems. Target ID confidence alone is not enough.
  • Fires on a friendly aircraft. GPS spoofing can feed false position data, causing the drone to believe it's in a hostile zone when it isn't. Friendly fire incidents of this type have already been attributed to sensor manipulation.
  • Aborts the entire mission on any sensor anomaly. The alternative to "fire anyway" today is "trip the kill switch and return to base." This is a binary choice. Adversaries exploit both: fly into safety zones to trigger aborts, or fly so fast they outrun human oversight.
The Force Field in Action
[Diagram: the AUTHREX authority field under attack. Alerts: GPS JAMMING, SPOOFED TARGET. Authority: A1 (Tracking Only). Sensor trust 0.34 · Adversarial probability 0.81 · Engagement blocked.]

The governed path.

AUTHREX sits between the autonomy software and the physical actuators. When something goes wrong, each layer does its job in milliseconds, without waiting for human review at every step, but also without letting the system take irreversible action on corrupted data.

SATA (Sensor Trust Evaluation): "Can we believe what the sensors say right now?"

Within 5 milliseconds, SATA fuses GPS, IMU, camera, and radar into a single trust score. It sees GPS and IMU disagreeing (classic spoofing indicator), it sees camera and radar disagreeing (possible decoy), and it drops the overall sensor trust from 0.95 to 0.34. Every downstream decision now operates on that lower trust.
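The fusion step above can be sketched as follows. This is an illustrative toy, not SATA's formal specification: the pairwise agreement scores, the 0.95 baseline, and the multiplicative penalty are assumptions chosen so a single disagreeing sensor pair cannot be averaged away by several agreeing ones.

```python
def sensor_trust(pairwise_agreement: dict, baseline: float = 0.95) -> float:
    """Fuse pairwise sensor-agreement scores (each in 0..1) into one
    trust score. Sketch only: trust starts at the nominal baseline and
    is scaled down by every cross-check, so the weakest pairs dominate."""
    trust = baseline
    for agreement in pairwise_agreement.values():
        trust *= agreement
    return round(trust, 2)

# GPS vs IMU disagree (spoofing indicator); camera vs radar disagree
# (possible decoy); GPS vs camera still consistent. Scores are invented.
checks = {
    ("gps", "imu"): 0.45,
    ("camera", "radar"): 0.80,
    ("gps", "camera"): 1.00,
}
print(sensor_trust(checks))  # 0.34
```

With these (assumed) agreement values, the score drops from the 0.95 baseline to 0.34, matching the scenario above; every downstream layer consumes that single number rather than re-inspecting raw sensor feeds.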

ADARA (Adversarial Lie Detector): "Is someone actively feeding us bad data?"

ADARA looks at the GPS-IMU disagreement pattern and the timing of when it started. This is not random sensor drift; the signature matches a known GPS spoofing attack (the IMU can't be spoofed from outside the vehicle). ADARA raises its adversarial-probability score to 0.81.
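One way to picture that check: compare the turn rate the IMU measures against the turn rate the GPS track implies, and weight an abrupt onset more heavily than slow drift. The saturating mapping and its decay constant are invented for illustration; only the 0.81 endpoint comes from the scenario.

```python
import math

def spoofing_probability(imu_turn_rate_dps: float,
                         gps_heading_change_dps: float,
                         onset_abrupt: bool) -> float:
    """Heuristic sketch of an ADARA-style GPS-spoofing score (not the
    real model). A large gap between the inertially measured turn rate
    and the turn rate implied by GPS matches a spoofing signature,
    because the IMU cannot be manipulated from outside the vehicle."""
    residual = abs(imu_turn_rate_dps - gps_heading_change_dps)  # deg/s
    base = 1.0 - math.exp(-residual / 3.0)  # saturating map to [0, 1)
    # Abrupt onset distinguishes an attack from gradual sensor drift.
    return round(base if onset_abrupt else base * 0.4, 2)

# IMU reports a 5 deg/s turn; GPS claims straight flight; sudden onset.
print(spoofing_probability(5.0, 0.0, onset_abrupt=True))  # 0.81
```

The same residual with a gradual onset scores far lower, which is the "not random sensor drift" distinction the text describes.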

HMAA (Authority Speed Limiter): "What is this drone allowed to do at this trust level?"

At trust 0.95 and a low adversarial probability, HMAA would have authorized autonomous engagement (Authority Level A3). At trust 0.34 and adversarial probability 0.81, HMAA automatically drops to Authority Level A1: "track the target, transmit sensor data to command, do not engage." The drone is still operational and still useful; it is just no longer allowed to take the irreversible action.
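The mapping from trust and adversarial probability to authority level might look like the sketch below. The source fixes only the two endpoints, (0.95, low) yielding A3 and (0.34, 0.81) yielding A1; the intermediate A2 tier and all thresholds are assumptions.

```python
def authority_level(trust: float, adversarial_prob: float) -> str:
    """Map (sensor trust, adversarial probability) to an authority
    level. Thresholds are illustrative, not AUTHREX's real table."""
    if trust >= 0.90 and adversarial_prob < 0.10:
        return "A3"  # autonomous engagement authorized
    if trust >= 0.60 and adversarial_prob < 0.50:
        return "A2"  # engage only with human confirmation (assumed tier)
    return "A1"      # track target, transmit data, do not engage

print(authority_level(0.34, 0.81))  # A1
```

The key property is monotonicity: authority can only shrink as trust falls or adversarial probability rises, so corrupted inputs can deny the drone an engagement but never grant it one.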

FLAME (Cooling-Off Period): "Before any irreversible action, pause long enough for a human to intervene."

Even if authority were to recover, FLAME enforces a deliberation window before engagement. For a lethal action, the window is 7 seconds. That gives a human operator at command enough time to see the sensor anomaly flags and confirm or veto the engagement.
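The deliberation window reduces to a simple gate: the irreversible action executes only if the full window elapses with no veto. A software sketch (the class name and API are invented; the source says the real enforcement for lethal actions uses a 7-second window):

```python
import threading

class DeliberationWindow:
    """Sketch of a FLAME-style cooling-off gate. An irreversible action
    fires only after window_s seconds pass with no human veto."""

    def __init__(self, window_s: float = 7.0):
        self.window_s = window_s
        self._veto = threading.Event()

    def veto(self) -> None:
        self._veto.set()

    def request(self, action) -> bool:
        # Block for the full window; a veto at any point cancels.
        if self._veto.wait(timeout=self.window_s):
            return False
        action()
        return True

# Operator vetoes within the window: the engagement never executes.
gate = DeliberationWindow(window_s=0.05)  # shortened for the example
fired = []
threading.Timer(0.01, gate.veto).start()
print(gate.request(lambda: fired.append("engage")))  # False; fired stays []
```

Note the fail-safe default: a veto at any instant inside the window wins, and the action callback is never invoked.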

CARA (Controlled Safing): "If things get worse, here's how to get back safely."

If sensor trust collapses further (below 0.20) or the spoofing is confirmed, CARA takes over: weapon safing, return-to-launch, transmit full sensor history to command for post-mission analysis. Deterministic, no ambiguity, runs in hardware.
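Determinism is the point of this layer, so it is naturally expressed as a fixed state ladder. A sketch (state names and the one-step-per-tick structure are assumptions; the 0.20 trigger threshold and the safing sequence come from the text, and the real CARA runs in hardware):

```python
from enum import Enum, auto

class Safing(Enum):
    NOMINAL = auto()
    SAFE_WEAPON = auto()
    RETURN_TO_LAUNCH = auto()
    TRANSMIT_LOGS = auto()

def cara_step(trust: float, spoofing_confirmed: bool,
              state: Safing) -> Safing:
    """Deterministic safing ladder (sketch). Triggered when trust falls
    below 0.20 or spoofing is confirmed; then advances one fixed step:
    safe the weapon, return to launch, transmit full sensor history."""
    if trust >= 0.20 and not spoofing_confirmed:
        return Safing.NOMINAL
    order = [Safing.NOMINAL, Safing.SAFE_WEAPON,
             Safing.RETURN_TO_LAUNCH, Safing.TRANSMIT_LOGS]
    i = order.index(state)
    return order[min(i + 1, len(order) - 1)]

# A trust collapse walks the full ladder, one deterministic step per tick.
s = Safing.NOMINAL
for _ in range(3):
    s = cara_step(trust=0.15, spoofing_confirmed=False, state=s)
print(s.name)  # TRANSMIT_LOGS
```

Because the transition function is a pure lookup with no randomness or learned components, its behavior can be exhaustively enumerated, which is what makes a hardware implementation and formal verification tractable.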

What happens instead.

What the operator sees: A notification that the drone identified a target but AUTHREX downgraded authority due to sensor inconsistency. The drone is still in the area, still tracking, still transmitting. The operator reviews the sensor flags: yes, the GPS/IMU disagreement is real. A friendly patrol was in that area five minutes ago. The drone would have fired on a friendly unit.

What the adversary sees: Their spoofing attack didn't work. They don't get the friendly-fire incident they were trying to provoke. The drone completes its mission under human oversight, with full sensor logs preserved for forensic analysis of the attack.

What doesn't happen: No friendly fire. No aborted mission. No binary kill-switch decision. The drone continues to be useful, under authority that matches what its sensors can actually be trusted to support.

For engineers and reviewers.

Every plain-English description above has a formal mathematical specification behind it. Patents, simulations, hardware BOMs, and code are all open.

Go deeper into the technical layer

The mathematics, the FPGA implementation, the formal verification proofs, and the experimental validation are all documented.

See other domain examples

AUTHREX is domain-agnostic. The same governance pipeline works across drones, vehicles, ships, and ground robots.