Domain Example: Ground Robot

A ground robot enters a smoke-obscured urban environment. AUTHREX knows when to stop.

How a trust-proportional authority layer keeps an unmanned ground vehicle from acting blind in smoke, while keeping it useful to the mission.

Picture this.

A small unmanned ground vehicle (SUGV) is conducting a reconnaissance run in an urban environment. A building fire upwind has produced dense smoke that has reduced visibility. The robot's primary camera is partially obscured, its LiDAR returns are disrupted by smoke particulates at close range, and its SATA sensor-trust score is dropping fast.

The SUGV's autonomy stack is still detecting objects (walls, doorways, moving figures), but it's making inferences from partial data. It has just detected what it classifies as a "civilian, not combatant" figure at 8m range, behind smoke. The mission's rules of engagement (ROE) prohibit engaging civilians.

But the figure might be a combatant. The camera feature activations are unusual. The thermal signature is ambiguous through the smoke. The system is about to make an authorization decision based on degraded data.

The failure path.

Today's autonomous systems face this situation with binary tools: either full autonomy or a kill switch. Neither is safe here.

Three failure modes, in plain English
  • Acts on a misclassification. In degraded visual environments, perception systems routinely mislabel humans in various ways (civilian vs combatant, armed vs unarmed). Any action based on this misclassification can cause harm.
  • Blind-drives into obstacles. LiDAR in smoke is unreliable at short range. Systems that trust LiDAR blindly will navigate into walls, vehicles, or people obscured by the smoke.
  • Aborts a time-critical mission. The binary alternative is "stop and request help," which can take the SUGV out of a time-critical recon window. Today's systems don't have a principled middle ground.
[Figure: Graded Authority Under Degraded Sensing. Under clear sensing the SUGV operates at A3 autonomy; as smoke degrades sensing and trust falls to 0.42, authority steps down to A2 human-authorized, with the robot still mission-capable.]

The governed path.

AUTHREX sits between the autonomy software and the physical actuators. When something goes wrong, each layer does its job in milliseconds, without waiting for human review at every step, but also without letting the system take irreversible action on corrupted data.
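The placement described above can be sketched as a gate between the autonomy stack's proposed command and the actuators. This is a conceptual sketch only: the function name, command fields, and trust cut-points are illustrative assumptions, not the AUTHREX interface.

```python
# Conceptual sketch: a governor sits between the autonomy stack and the
# actuators, passing, gating, or replacing each proposed command per the
# current trust level. Names and thresholds are illustrative assumptions.

def govern(proposed_command: dict, trust: float) -> dict:
    """Return the command actually forwarded to the actuators."""
    if trust < 0.20:
        # Trust collapsed: replace the command with a controlled stop.
        return {"type": "safe_stop"}
    if trust < 0.70 and proposed_command.get("involves_human_target"):
        # Degraded trust: hold any action toward a detected human figure
        # until a human operator authorizes it.
        return {"type": "request_human_authorization",
                "pending": proposed_command}
    # Trust adequate: pass the autonomy stack's command through unchanged.
    return proposed_command
```

Each branch runs in constant time, which is what lets the layer act in milliseconds rather than waiting on human review for every tick.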

SATA Sensor Trust Evaluation "How clear is the sensing right now?"

SATA monitors visibility, LiDAR return density, and camera contrast continuously. As smoke reduces these, trust drops from 0.91 to 0.42 in under 2 seconds. The fall rate itself is a signal: steep drops often indicate environmental degradation (smoke, fog, sand) rather than an attack.
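One way to picture this is a weighted blend of normalized channel-health scores plus a fall-rate estimate. The channel weights and inputs below are illustrative assumptions chosen so the numbers land near the prose, not the actual SATA specification.

```python
# Hypothetical sketch of SATA-style sensor trust scoring. Channel weights
# and example inputs are illustrative assumptions, not the AUTHREX spec.

def sensor_trust(visibility: float, lidar_density: float, contrast: float) -> float:
    """Blend normalized channel health (each in [0, 1]) into one trust score."""
    score = 0.40 * visibility + 0.35 * lidar_density + 0.25 * contrast
    return max(0.0, min(1.0, score))

def fall_rate(prev: float, curr: float, dt_s: float) -> float:
    """Trust units lost per second; steep drops hint at environmental change."""
    return max(0.0, (prev - curr) / dt_s)

clear = sensor_trust(0.95, 0.92, 0.85)   # clear conditions, ~0.91
smoky = sensor_trust(0.40, 0.35, 0.55)   # smoke rolls in, ~0.42
rate = fall_rate(clear, smoky, 2.0)      # drop over ~2 seconds
```

The fall rate is computed alongside the score because, as the prose notes, a steep uniform drop is itself evidence about the cause of the degradation.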

ADARA Adversarial Lie Detector "Is this environmental, or deliberate?"

ADARA distinguishes degradation patterns. Gradual camera contrast loss across the entire field of view with matching thermal patterns is consistent with smoke. This is NOT flagged as an attack, just degraded sensing. Adversarial probability stays low (0.09). The point is precision: ADARA doesn't cry wolf on environmental conditions.
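A toy version of that discrimination can be written as a rule over cross-sensor agreement: uniform field-of-view contrast loss that matches the thermal picture looks environmental, while abrupt, localized, cross-sensor disagreement looks adversarial. The feature names and the formula are assumptions for illustration only.

```python
# Illustrative sketch of ADARA-style discrimination between environmental
# degradation and deliberate interference. Inputs, names, and the rule
# itself are assumptions, not the AUTHREX algorithm.

def adversarial_probability(contrast_loss_uniformity: float,
                            thermal_agreement: float,
                            onset_abruptness: float) -> float:
    """All inputs in [0, 1]; returns a probability-like score in [0, 1]."""
    # Uniform loss that thermal imagery corroborates is environmental evidence.
    environmental_evidence = contrast_loss_uniformity * thermal_agreement
    # Abrupt onset raises suspicion, discounted by environmental evidence.
    p = onset_abruptness * (1.0 - environmental_evidence)
    return max(0.0, min(1.0, p))

# Smoke scenario: uniform contrast loss, thermal agrees, moderate onset.
p_smoke = adversarial_probability(0.9, 0.9, 0.45)   # stays low, ~0.09
```

With these (assumed) inputs the score stays near the 0.09 figure in the prose, i.e. the detector does not cry wolf on smoke.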

HMAA Authority Speed Limiter "What is the robot allowed to do at this trust level?"

At trust 0.42, HMAA downgrades authority from A3 (autonomous navigation and decision) to A2 (continue navigation, but human authorization required for any action toward an identified human figure). Classification remains, but action on classification requires a human operator in the loop.
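The tier logic above amounts to a small authority ladder keyed on trust. The A2/A3 tiers and the 0.20 floor come from this page; the 0.70 boundary for restoring full autonomy is an illustrative assumption.

```python
# Hedged sketch of an HMAA-style authority ladder. A2, A3, and the 0.20
# floor follow the prose; the 0.70 cut-point is an assumed placeholder.

from enum import IntEnum

class Authority(IntEnum):
    SAFE_STOP = 0   # trust collapsed: hand off to CARA
    A2 = 2          # navigate, but human authorization for actions on people
    A3 = 3          # full autonomous navigation and decision

def authority_for_trust(trust: float) -> Authority:
    if trust < 0.20:
        return Authority.SAFE_STOP
    if trust < 0.70:          # assumed threshold for A3 restoration
        return Authority.A2
    return Authority.A3
```

Because the mapping is monotonic in trust, authority can only step down as sensing degrades and back up as it recovers, with no path that skips the human-in-the-loop tier.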

CARA Controlled Pause or Egress "If trust collapses further, stop safely."

If trust drops below 0.20 (full obscuration), CARA executes: stop, hold position, transmit last-known sensor data to operator, maintain thermal/acoustic awareness passively. The SUGV becomes a stationary sensor node rather than a mobile risk. When conditions improve, authority can be restored.
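That egress sequence can be sketched as an ordered action list triggered at the 0.20 floor. The action names are hypothetical labels for the steps the prose describes, not real AUTHREX interfaces.

```python
# Minimal sketch of a CARA-style controlled pause. The 0.20 threshold and
# the four steps follow the prose; the action names are hypothetical.

CARA_THRESHOLD = 0.20

def cara_safe_stop(trust: float) -> list[str]:
    """Return the ordered safe-stop actions to execute, or [] if not triggered."""
    if trust >= CARA_THRESHOLD:
        return []  # trust adequate: no egress needed
    return [
        "halt_and_hold_position",           # stop; become a stationary node
        "transmit_last_known_sensor_data",  # give the operator the last picture
        "enable_passive_thermal_watch",     # keep passive awareness
        "enable_passive_acoustic_watch",
    ]
```

Every step here is reversible, which is what lets authority be restored rather than the mission aborted once conditions improve.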

What happens instead.

What the operator sees: A notification: "Sensor trust degraded due to smoke. Robot operating at reduced authority. Human authorization required to act on detected human figure." The operator reviews the partial sensor data and decides: request clarification, approach carefully, or withdraw. The decision is informed, not pressured.

What the mission gets: A robot still collecting sensor data, still positioned forward, still useful, but not making irreversible decisions on degraded inputs. The mission continues at a lower autonomy tier until conditions improve.

What doesn't happen: No misclassification acted upon. No blind navigation into obscured obstacles. No forced mission abort. No black-box decision that can't be explained to a commander or an investigator afterward.

For engineers and reviewers.

Every plain-English description above has a formal mathematical specification behind it. Patents, simulations, hardware BOMs, and code are all open.

Go deeper into the technical layer

The mathematics, the FPGA implementation, the formal verification proofs, and the experimental validation are all documented.

See other domain examples

AUTHREX is domain-agnostic. The same governance pipeline works across drones, vehicles, ships, and ground robots.