03:47:03 UTC · Thermal anomaly · Grid 4C · Altadena, California
It's 4 AM. A fire starts in the foothills.
Asha has 90 seconds.
The thermal sensors detected it instantly. The nearest residential grid is under 800 metres from the forest edge. The standard dispatch process takes 48 minutes on a good night.
The hardware to do better already exists. The interface doesn't.
This is my response to a design challenge from FlytBase, an enterprise platform for autonomous drone operations, with a specific brief: design the operator interface that turns the standard 48-minute response into a 90-second one.
Project
FlytBase Sentinel
Origin
Design Fusion S3 (Designathon)
Role
End-to-end Product Design

01 – Problem
The hardware was fast. The interface wasn't.
FlytBase builds autonomous drone infrastructure for public safety. Their drones can launch without a pilot, stream live thermal footage, and coordinate across a whole fleet without any human input. In a wildfire scenario, that hardware can get a drone airborne and feeding live thermal data within 30 seconds of detection. Getting from sensor alert to coordinated response in 90 seconds was already within reach – technically.
Their Design Fusion S3 challenge gave me a specific brief: design the command center interface that makes those 90 seconds real. Not a concept. Not a dashboard. A structured operator experience sitting between the autonomous hardware and the human who has to act on it – in the dark, at 3 AM, while managing multiple zones at once.
The 48-minute window – where the time actually goes
3:47 AM – Sensor detects anomaly · 0 min
3:52 AM – Alert reaches monitoring center · +5 min
3:58 AM – Operator reviews 3 separate systems · +11 min
4:05 AM – Decision: "Send ground crew" · +18 min
4:18 AM – Crew confirms active fire · +31 min
4:35 AM – Coordinated response begins · +48 min
Sentinel target: 90 seconds
Look at the timeline above. The sensor fires instantly. The drone can be airborne in under a minute. But between the alert arriving and a decision being made, there's an 11-minute gap: the operator cross-referencing three disconnected systems by hand. That 11 minutes is the interface failure. It's the work Sentinel exists to absorb.
The cognitive load is the bottleneck. The data exists. The tools exist. The operator's brain, doing assembly work that the interface should be doing, is what turns a 90-second problem into a 48-minute one.
Sentinel is a data preparation engine and the interface layer between FlytBase's autonomous hardware and the operator. Instead of the operator assembling a picture from disconnected sensor logs, Sentinel runs silently from the first sensor reading: fusing thermal, satellite, and ground sensor data, and scoring ignition probability against historical burn patterns. By the time it interrupts the operator, the corroboration is complete and the confidence is scored; as decisions are taken phase by phase, the AI builds out the plan for the next one. The operator's job becomes three explicit decisions. Everything else is handled.
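A minimal sketch of what that silent preparation could hand over, in TypeScript. Everything here (SentinelAlert, buildAlert, scoreIgnition, the 0.8 threshold, the example factor values) is a hypothetical illustration of the idea, not FlytBase's API: interrupt only with a corroborated, multi-source alert that carries its reasoning.

```ts
// Hypothetical sketch: the "prepared" alert Sentinel assembles before
// interrupting the operator. Names, thresholds, and values are illustrative.

interface SensorReading {
  source: "thermal" | "satelliteIR" | "ground";
  gridId: string;
  value: number;     // e.g. thermal delta above seasonal baseline
  timestamp: string; // ISO-8601 UTC
}

interface ContributingFactor {
  label: string;  // what the operator can actually interrogate
  detail: string;
}

interface SentinelAlert {
  gridId: string;
  ignitionProbability: number;   // 0..1, scored against historical burns
  factors: ContributingFactor[]; // reasoning, not just a score
  corroborated: boolean;         // true only after multi-source agreement
  raisedAt: string;
}

// Corroboration rule: interrupt only when independent sources agree
// and the classifier crosses a confidence threshold.
function buildAlert(readings: SensorReading[], threshold = 0.8): SentinelAlert | null {
  const sources = new Set(readings.map(r => r.source));
  const probability = scoreIgnition(readings);
  if (sources.size < 2 || probability < threshold) return null; // keep working silently
  return {
    gridId: readings[0].gridId,
    ignitionProbability: probability,
    factors: [
      // Illustrative values borrowed from the case-study scenario.
      { label: "Prior burn history", detail: "Historical incidents in this grid" },
      { label: "Wind vector", detail: "NE 18 mph" },
      { label: "Thermal delta", detail: "Above seasonal baseline" },
    ],
    corroborated: true,
    raisedAt: new Date().toISOString(),
  };
}

function scoreIgnition(readings: SensorReading[]): number {
  // Stand-in for the real classifier: average normalized signal strength.
  return readings.reduce((s, r) => s + Math.min(r.value / 20, 1), 0) / readings.length;
}
```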
02 – What I Got Wrong First
My first version solved the wrong problem.
Finding 02 from my research (Section 03) sent me in the wrong direction. More information at the wrong moment makes decisions slower, so the obvious answer seemed to be compressing the interface as much as possible. That thinking produced the Escalation Slider.

Escalation Slider concept wireframes
(AI assisted)
Drag from Assess → Contain → Rescue.
Each position fires a predefined AI macro-script. Clean. Fast.
A slider doesn't record a conscious decision. It records a gesture.
In this early version, dragging the slider directly triggered actions like drone deployment and containment. There was no step where the operator reviewed or confirmed what the system was about to do. The system logged: "Operator moved slider at 03:48". There's no decision, no confirmation, no review, no ownership.
Rejected – Escalation Slider
Single gesture triggered multiple critical actions
No visibility into system behavior before execution
No explicit approval from the operator

Chosen – Intent-Based Authorization Action Bar
AI proposes a specific mapped plan; actions structured into phases (Scan → Verify → Contain → Rescue)
Operator explicitly confirms each action

Why
Critical systems require deliberate decisions, not shortcuts
The operator must see, understand, and approve before action
Accountability should be tied to clear decisions, not gestures

What I designed
"I stopped trying to automate the operator's job. I started designing a system to assist their judgment."
From that point, the question became: what exactly does the operator need to do, and what can the AI carry for them? I refer to the operator as Asha throughout this case study. She's a night-shift command center operator managing multiple zones simultaneously.
Here is where I drew that line:
What AI handles autonomously
Sensor fusion + cross-referencing
Ignition probability classification
Drone routing + dispatch
Terrain + fire spread modeling
Evacuation route generation
Passive audit trail, timestamped
What the operator authorizes
Dispatch drone or alert ground team
Confirm active incident or monitor-only
Authorize containment plan
Activate drone evacuation guidance
Override any AI proposal
Handle exceptions the system surfaces
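One way to make that boundary explicit in code, sketched in TypeScript. The type names and action lists are assumptions drawn from the two lists above, not a real FlytBase interface; the point is that gated actions can only ever compile to a proposal, never to execution.

```ts
// Hypothetical sketch of the authority boundary: TALON can execute some
// actions directly; others can only become proposals awaiting a human.

type AutonomousAction =
  | { kind: "fuseSensors" }
  | { kind: "classifyIgnition" }
  | { kind: "routeDrone"; droneId: string }
  | { kind: "modelSpread" }
  | { kind: "generateEvacRoutes" }
  | { kind: "appendAuditLog"; entry: string };

type GatedAction =
  | { kind: "dispatchDrone"; gridId: string }
  | { kind: "confirmIncident" }
  | { kind: "authorizeContainment"; planId: string }
  | { kind: "activateEvacGuidance" };

interface Proposal {
  action: GatedAction;
  rationale: string;   // TALON must show its reasoning
  authorized: boolean; // flips only on explicit operator input
}

// TALON never executes a gated action; it can only return a proposal.
function propose(action: GatedAction, rationale: string): Proposal {
  return { action, rationale, authorized: false };
}

// Authorization is a distinct, attributable act by a named operator.
function authorize(p: Proposal, operatorId: string): Proposal {
  console.log(`${operatorId} authorized ${p.action.kind} at ${new Date().toISOString()}`);
  return { ...p, authorized: true };
}
```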
03 – Research
What I read – and the four things that actually changed the design.
I did not interview real wildfire dispatchers; this was desk research on emergency dispatch. I read about emergency dispatch workflows, how operators behave under time pressure, and how trust works in human-AI systems for critical decisions. Four findings from that research directly shaped every design decision that followed.
01
Operators build the picture manually. Today's dispatch tools don't talk to each other. The operator pulls from separate sensor dashboards, satellite feeds, and weather monitors. The mental model is theirs to assemble alone.
02
More information at the wrong moment makes things worse. Under stress, people fixate on one data point. Flooding the screen with more data at peak urgency slows decisions down; it doesn't speed them up.
03
Operators approve plans, they don't pilot assets. The mental model of "operator as pilot" breaks down at scale. What they're actually doing is deciding whether a proposed plan is right, not controlling individual drones.
04
Every decision needs to be traceable. In a critical system, you can't have ambiguity about whether a human or AI made a call. Legal accountability and operational trust both depend on a clear, permanent record.
These four findings directly map to the six design decisions in Section 04 and the AI accountability system in Section 06.
04 – Design Decisions
Six decisions that shaped the final system.

1
Alerts navigated to a new screen → alerts float over the current one
Jumping to a new screen on every alert wiped out the spatial context Asha was already working with.
Before
Alert fires → full UI switches to a new screen. Whatever the operator was working on in the previous incident disappears.
After
Alert arrives as a modal over the current map. The operator's spatial context stays; she decides without losing her bearings.
Drawing the before and after side by side was the moment the modal became the only option. Operators may already be attending to a situation. Before the alert fires: Asha is watching Grid 4C, three active zones in view, drone positions live. After the first alert fires: a full-screen takeover on every alert forces a full context reset, and everything she was tracking is gone. The alert arrives at precisely the moment spatial context matters most. She needs to know where this incident sits relative to the zones she has been watching for the past 40 minutes. A screen switch destroys that information at the exact second she needs it. The modal keeps her map alive at the moment she has to act on it.


Diagram – Alert lifecycle: notification → alert with confidence % and reasoning; the alert shifts onto the right panel until data is ready.
2
Confidence percentage → showing the reasoning behind it
A score alone gives the operator nothing to push back on. Showing the inputs does.
Before
Short replay clip to build confidence. Borrowed from consumer products. Doesn't reflect sensor-based detection.
After
AI classification: ignition probability + contributing factors (prior burns, wind vector, thermal delta above baseline). Reasoning surfaced, not hidden.
I ran a paper test. AI returns "87% ignition risk." I sat with that number and asked: what does Asha do with 87%? She either trusts it or she does not. If she does not, a score gives her nothing to push back on. She would have to override blind, without any reasoning to stand on. Then I thought about when operators actually override AI recommendations: when they know something the system does not. A confidence percentage cannot be interrogated. The contributing inputs (prior burn history for this grid, wind NE 18mph, thermal delta above seasonal baseline) are something Asha might genuinely recognise. She might know that Grid 4C had a controlled burn permit filed last week. The score tells her how certain TALON is. The inputs give her a basis for disagreement, and a basis for disagreement is the only thing that keeps the human in the loop meaningfully.
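A sketch of that principle as a data contract, assuming hypothetical types (Classification, Override): the operator's disagreement is recorded against a specific shown input, never against the bare percentage.

```ts
// Hypothetical sketch: an override must cite something, so disagreement
// attaches to a specific input the interface actually displayed.

interface ClassificationInput {
  id: string;
  label: string; // "Prior burn history", "Wind vector", ...
  value: string;
}

interface Classification {
  probability: number;           // what TALON is certain about
  inputs: ClassificationInput[]; // what the operator can dispute
}

interface Override {
  disputedInputId: string; // which input the operator disagrees with
  reason: string;          // e.g. "Controlled burn permit filed last week"
  at: string;
}

function overrideClassification(
  c: Classification,
  inputId: string,
  reason: string
): Override {
  const input = c.inputs.find(i => i.id === inputId);
  // A bare score can't be interrogated; an override must reference an input.
  if (!input) throw new Error("Override must reference a shown input");
  return { disputedInputId: inputId, reason, at: new Date().toISOString() };
}
```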
3
Same density across every phase → information scales with urgency
The same interface renders less as stakes rise: the operator reads less because the situation demands more focus, not more data.
Scan – Build awareness
Full zone context visible · Drone cards, all detail · 6 log entries visible · AI reasoning shown in detail
Verify – Cross-reference
Focus on risk + spread window · Drone feed dominant · Reduced logs · AI highlights supporting evidence
Contain – Decide fast
Headlines + status dots · Key metrics highlighted · Only critical logs visible · Clear actions surfaced
Rescue – Supervise only
Structures cleared as a number · Single-line drone strips · Latest status only · No AI reasoning, just status
I counted the information elements in the wireframe at each phase. Scan: 28 elements. Verify: 31. Contain: 34. Rescue: 38. The information was increasing as urgency increased. That is the wrong direction. The mistake was thinking operators need more information as a situation escalates. What actually changes is not the volume, it is the type. In Rescue, Asha's only job is exceptions. But the panels were still showing her the full sensor data and AI reasoning she needed in Scan. She could not do the Rescue job because the interface was still asking her to do the Scan one. The layout stays constant so she has zero spatial relearning under stress. The information contract changes so she only sees what her current job actually requires.
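The same idea as a configuration sketch, assuming a hypothetical PanelContract type: one constant layout, four declared information contracts. The values mirror the phase cards above and are illustrative, not a real spec.

```ts
// Hypothetical sketch of the "information contract": the layout never
// changes, but each phase declares what the fixed panel slots render.

type Phase = "scan" | "verify" | "contain" | "rescue";

interface PanelContract {
  logEntries: number;                       // how much history is visible
  aiReasoning: "full" | "evidence" | "none"; // how much of TALON's thinking shows
  droneDetail: "cards" | "strips";           // full cards vs single-line strips
  headline: string;                          // the one thing this phase is for
}

const CONTRACTS: Record<Phase, PanelContract> = {
  scan:    { logEntries: 6, aiReasoning: "full",     droneDetail: "cards",  headline: "Build awareness" },
  verify:  { logEntries: 3, aiReasoning: "evidence", droneDetail: "cards",  headline: "Cross-reference" },
  contain: { logEntries: 1, aiReasoning: "none",     droneDetail: "strips", headline: "Decide fast" },
  rescue:  { logEntries: 1, aiReasoning: "none",     droneDetail: "strips", headline: "Exceptions only" },
};
```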



Fig 07
The same panel position, three completely different information contracts. Rescue state: zero prose,
drone status strips, structure clearance count only.
4
All actions visible at once → only the right actions shown per phase
I mapped every button in the wireframe against what Asha would actually do with it at each phase. Each phase shows only the actions relevant to that moment. In Scan, the action bar is empty: the system is monitoring, nothing to decide. In Contain, a two-second hold replaces the "are you sure?" modal; the physical commitment matches the weight of the decision. In Rescue, there is no primary action at all. An absent button is the signal that the system is executing correctly. The snaps below show each state individually.

Fig 07
Action Bar for Contain and Rescue Phase
The first version had a confirmation modal: "Authorize deployment? CANCEL / CONFIRM." Standard pattern. I looked at it and realised what the modal was actually saying: it was treating the tap before it as a potential accident. The whole dialog existed to compensate for an interaction that should not have been the decision in the first place. The deeper problem: tap, modal, confirm is the same interaction pattern as deleting a file or confirming a meeting invite. Deploying drones over 47 residential structures at 4 AM on a thermal sensor reading should not feel like confirming a meeting invite. The 2-second hold came from a different question: what physical act would actually match the weight of this decision? Not a dialog asking if you are sure. A sustained physical commitment that requires you to hold your intent for the full duration. Releasing early cancels everything.
Authorize containment – press-hold sequence with timing, cancel, and feedback states:

Idle – AUTHORIZE. Button enabled, amber.
Pressed – AUTHORIZING... Haptic pulse begins.
Holding – HOLD · 1.2s. Progress bar fills, 2s total.
Complete – ✓ AUTHORIZED. Teal pulse; the system executes.
Released early – CANCELLED. Resets to idle, no action taken.
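The hold gesture reads naturally as a small state machine. A minimal TypeScript sketch, assuming hypothetical names (HoldToAuthorize, HOLD_DURATION_MS): the side effect fires only when the timer completes, and early release can never execute anything.

```ts
// Hypothetical sketch of the 2-second hold as an explicit state machine.

type HoldState = "idle" | "holding" | "authorized" | "cancelled";

const HOLD_DURATION_MS = 2000;

class HoldToAuthorize {
  state: HoldState = "idle";
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private onAuthorize: () => void) {}

  press(): void {
    if (this.state !== "idle") return;
    this.state = "holding"; // haptic pulse starts here
    this.timer = setTimeout(() => {
      this.state = "authorized";
      this.onAuthorize(); // the system executes only now
    }, HOLD_DURATION_MS);
  }

  release(): void {
    if (this.state !== "holding") return;
    if (this.timer) clearTimeout(this.timer); // early release cancels everything
    this.state = "cancelled"; // UI shows CANCELLED, then resets
  }

  reset(): void {
    if (this.state === "cancelled") this.state = "idle";
  }
}

// Usage: releasing before the 2s timer fires means nothing deploys.
const gate = new HoldToAuthorize(() => console.log("Containment plan executing"));
gate.press();   // AUTHORIZING... progress fills
gate.release(); // before 2s: CANCELLED, no action taken
```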
5
Typing during an emergency → team chips selected before the emergency peaks
Typing "Route A, 15 personnel" at 4 AM during an active incident is the wrong interaction model entirely.
Before
Free-text inputs in evacuation modal. Type route name. Type headcount. "15 personnel" isn't how ground teams work, and typing under stress invites errors.
After
Role-based team chips pre-staged in Contain. Zone assignment filtered by team certification. By Rescue, the plan is already locked in; the modal is review, not input.
The insight was that the evacuation plan shouldn't start at the moment of rescue. Asha approves team assignments during Contain, when she has time to review. The Rescue phase inherits the approved plan and executes it: no typing, no form, no delay.
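A sketch of the chips-not-text model in TypeScript. The Team/Zone shapes and certification strings are assumptions for illustration: the modal can only offer pre-filtered, certified teams, and by Rescue the plan is a read-only value.

```ts
// Hypothetical sketch: zone assignment offers only teams whose
// certification matches the zone, so there is nothing to type.

interface Team {
  id: string;
  role: "fire" | "medical" | "evacuation";
  certifications: string[]; // e.g. "wildland", "structure"
}

interface Zone {
  id: string;
  requiredCert: string;
}

// Chips shown in the Contain-phase modal: pre-filtered, pre-staged.
function eligibleChips(teams: Team[], zone: Zone): Team[] {
  return teams.filter(t => t.certifications.includes(zone.requiredCert));
}

// By Rescue the plan is frozen: the modal renders it read-only.
interface LockedPlan {
  assignments: ReadonlyArray<{ teamId: string; zoneId: string }>;
  lockedAt: string;
}
```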
Fig 08
Team assignment in Contain – chips, not text inputs. The system filters valid zones per team certification. By Rescue, this is already locked in.
Panel · OLD Team Assignment

Manual team assignment for zones

Panel · NEW Team Assignment

Role-based team chips – Contain phase, zone assignment modal
6
Calling it "AI" → giving it a name, a callsign, and a log entry: TALON
You can't audit "AI." You can audit TALON.
Transparency
Shows why it made a choice: inputs and contributing factors, not just a conclusion score.
Separation
TALON logs in cyan. Operator logs in amber. The two are never visually merged.
Accountability
Every TALON action has an author, a timestamp, and a verb. Every operator action too.
Operator Assistant
An operator can't build trust with "AI." They can build it with a system that signs its own actions.
The moment happened when I was designing the audit trail section and needed to write mock log entries. I typed: "AI detected thermal spike at 03:47. AI classified ignition risk. AI proposed containment plan." Then I tried to imagine an incident review where something had gone wrong. Investigators ask: who authorized what? The log says "AI." Dead end. There is no entity to question, no track record to reference, no pattern to audit across incidents. When I switched to TALON: "TALON detected thermal spike. TALON classified ignition risk." Something changed. Now there is an entity with a history. TALON can be cross-referenced across past incidents, its accuracy measured over time, its failure modes documented. The name felt cosmetic until I was writing log entries. That is when it became structural. An AI is a black box. TALON is a recognisable partner with an auditable record of every call it made.
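What that looks like as a log contract, sketched in TypeScript with hypothetical names: every entry has an author, a verb, and a timestamp, and "TALON" is a queryable entity rather than an anonymous "AI".

```ts
// Hypothetical sketch: every log line has an author, a verb, and a
// timestamp - rendered cyan for TALON, amber for operators.

type Author = "TALON" | `operator:${string}`;

interface AuditEntry {
  author: Author;  // who acted
  verb: string;    // "detected", "classified", "proposed", "authorized"
  subject: string;
  at: string;      // ISO-8601 UTC
}

const trail: AuditEntry[] = [];

function log(author: Author, verb: string, subject: string): void {
  trail.push({ author, verb, subject, at: new Date().toISOString() });
}

// Mock entries mirroring the case study's scenario.
log("TALON", "detected", "thermal spike at Grid 4C");
log("TALON", "classified", "ignition risk 87%");
log("operator:asha.rao", "authorized", "containment plan");

// An incident review can now ask a named entity specific questions:
const talonCalls = trail.filter(e => e.author === "TALON");
```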
05 – Design System
Every visual choice connects back to the problem.
This is a dark interface because Sentinel is a night-shift tool. A bright UI at 3:47 AM in a darkened ops center would actively degrade Asha's vision and wash out the thermal map. Every token (color, opacity, type weight, border radius) exists to answer one question: what does the operator need to process faster? Nothing in the system is decorative. If a color doesn't encode operational meaning, it doesn't ship.
Color is information, not aesthetics
The palette is a severity language. Asha never has to read a label to know how urgent something is; the color tells her before the words do. TALON's actions are always cyan. Asha's decisions are always amber or white. System failures escalate through the same gradient. One glance at any screen tells you who's acting and how critical it is.
Color tokens – what each one means operationally:
Cyan – TALON is acting
Teal – Nominal · safe
Amber – Asha decides
Yellow – Watch · degraded
Red – Critical · act now
Neutral – System chrome
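The palette as a token map, sketched in TypeScript. The hex values are placeholders I invented for illustration; the operational meanings are the ones above.

```ts
// Hypothetical token map: every color carries one operational meaning,
// so a glance tells the operator who is acting and how urgent it is.
// Hex values are placeholders, not the shipped palette.

const SEVERITY_TOKENS = {
  cyan:    { hex: "#22d3ee", means: "TALON is acting" },
  teal:    { hex: "#2dd4bf", means: "Nominal / safe" },
  amber:   { hex: "#f59e0b", means: "Asha decides" },
  yellow:  { hex: "#eab308", means: "Watch / degraded" },
  red:     { hex: "#ef4444", means: "Critical / act now" },
  neutral: { hex: "#9ca3af", means: "System chrome" },
} as const;

type SeverityToken = keyof typeof SEVERITY_TOKENS;
```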
Glass panels – depth without distraction
Panels use frosted glass over the 3D terrain. The map is always visible beneath the data; Asha never loses spatial context, even when reading the Intel panel. Panel borders are 12% white opacity: visible enough to define edges, invisible enough not to compete with the data inside them.
Component states encode phase, not just interaction
A traditional design system documents hover, active, disabled. Sentinel's components have a second axis: what phase is the system in? The same action tile renders differently in Contain vs. Rescue, not because the component changed, but because the operator's job changed. Components carry phase context so Asha doesn't have to remember it.
Action tile states:
Available – Ready
Recommended – TALON suggests
Complete – Done
Disabled – Gated

Panel density by phase:
Scan – Full context
Verify – Key evidence
Contain – Headlines only
Rescue – Exceptions only
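A sketch of that second axis, assuming a hypothetical renderActionTile helper: the render is keyed by interaction state and phase together, so the component carries phase context instead of the operator.

```ts
// Hypothetical sketch of the second state axis: a component's render
// is keyed by interaction state AND system phase, not interaction alone.

type InteractionState = "available" | "recommended" | "complete" | "disabled";
type Phase = "scan" | "verify" | "contain" | "rescue";

function renderActionTile(interaction: InteractionState, phase: Phase): string {
  // Same tile, different contract per phase.
  if (phase === "rescue") return "status-only strip"; // supervise, don't act
  if (phase === "contain" && interaction === "available") {
    return "hold-to-authorize tile"; // weight matches the decision
  }
  if (interaction === "recommended") return "tile + TALON rationale";
  return `standard tile (${interaction})`;
}
```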
The Global Command Anchor – navbar as information design
A standard product navbar shows a logo and maybe a user avatar. This one is a live telemetry strip. Asha's peripheral awareness of system health, operational mode, and active metrics lives here, persistent across every phase, every panel, every decision.

Left to right: brand mark · operational mode + scene detail · phase-aware metrics that change per mode (Scan shows system status; Contain/Rescue shows structures at risk, airborne assets, wind vector) · system controls (Incidents, Notifications) · live UTC clock + operator badge. Every element is always visible; Asha never navigates to find system state.

Design decisions principle: if a visual element doesn't help Asha make a faster or more accurate decision, it doesn't exist in the interface. There are no brand illustrations, no decorative gradients, no rounded avatars. The glass surfaces, the color tokens, the tabular numerics: all of it is load-bearing.
06 – The 90-Second Window
Operator Persona
Asha Rao – Regional Command Center Operator
Not a pilot. The decision bridge between TALON's autonomous systems and ground teams, fire crews, and evacuation routes. Three authorization gates. Night shift. Altadena sector.
How control passes between TALON and the operator.
System authority loop – TALON → Operator → Execution
TALON – Gather + Classify
Sensor fusion, ignition probability, terrain model, spread estimate. → Propose plan

Operator Gate – Review + Authorize
3 explicit gates: dispatch, confirm, authorize containment. → Authorization

Execution – Deploy + Monitor
Drone deployment, evac routing, field intel relay, exception surfacing. → Log + Audit

Audit Trail – Timestamped log
Every TALON action in cyan, every operator action in white. Always separated.
Override flow: the operator can reassert control at any point – TALON adjusts the plan around active deployments.
See the full 90-second attempt
The structure: TALON gathers data, models the situation, and proposes a plan. Asha makes three explicit decisions: dispatch, confirm, authorize. Everything else is autonomous. Click through each phase to see exactly what's on screen and what's being decided.
Scan · 0–15s | Verify · 15–45s | Contain · 45–75s | Rescue · 75–90s
Scan · Alert
03:47:03 – 03:47:45 UTC · 0–15 seconds
Phase 01 · Screen recording
Alert modal appearing over live map
[ Replace with looping .mp4 ]
TALON – working silently
Thermal spike at Grid 4C. Cross-references satellite IR, ground sensors, controlled burn database. Runs ignition probability classification: prior burn history for this grid, wind NE 18 mph, thermal delta above seasonal baseline. At 03:47:18, corroboration crosses the threshold.
Asha – interrupted, not disoriented
Alert arrives as a modal layer over the map she's already watching. Context intact. She sees TALON's reasoning: the inputs that produced the classification, not just a score.
Asha decides
A – Dispatch Scout-02 to Grid 4C: drone routes out, live evidence collection begins
or
B – Dismiss + notify ground team: local fire dept or police route to investigate on foot
Screen · Scan + Alert
Alert interrupt modal over live map – SCAN phase
[ Insert: 1700×900 screenshot ]
Fig 02
The first interrupt – modal over live map, TALON classification inputs visible, two-path decision.
07 – What I'd Do Differently
Honest about what's unfinished – and what's intentional.
What "90 seconds" actually means
The 90-second target is detection to authorized response handoff: not full evacuation, not every drone on station, not 47 households cleared. The clock ends when TALON has delivered a coordinated briefing package to the fire department and Asha has authorized the containment plan. Everything after that is response execution time, and that isn't a design problem Sentinel can solve.
The brief's own scenario makes this clear: fire department receives GPS coordinates, thermal map, and approach route at the 108-second mark. Evacuation alerts go out at 3:49:05. The brief itself doesn't hit 90 seconds to full response. It hits 90 seconds to the decision and handoff that makes a coordinated response possible. That's the target Sentinel is designed for.
Why the human stays in the loop – a deliberate choice, not a limitation
Full autonomy would be faster. It would also mean drones deploying over a residential area at 3 AM based entirely on a thermal sensor reading that might be a controlled burn, a faulty sensor, or a false pattern match. No human saw it. No one authorized it. When something goes wrong (and at scale, it will) there's no accountability chain and no legal clarity.
The regulatory reality alone makes full autonomy non-deployable today. FAA airspace rules, liability law, and municipal emergency response protocols don't accommodate "the AI decided and launched." The human gate in Sentinel isn't friction introduced by a cautious designer. It's the thing that makes this system legally operable and auditable in a real emergency.
This is also why TALON proposes and Asha authorizes, every time, without exception. As TALON's models improve on real incident data and operator override patterns reveal which decisions are consistently approved unchanged, the authority boundary can shift. But that delegation needs to be earned through a track record, not assumed from day one. The audit trail Sentinel generates now is the training signal that makes safer delegation possible later.
Edge cases the 90-second window doesn't survive
Drone dock distance is the biggest variable. If the nearest dock is 4 kilometers from Grid 4C, the Verify phase alone blows the window before Asha makes a single decision. Heavy network latency in smoke-filled terrain degrades TALON's sensor fusion. An operator already managing a concurrent active incident can't give a new alert full attention in the first 15 seconds. These aren't design failures; they're infrastructure and operational constraints the interface can surface but can't control. The chaos states section exists precisely because the interface needs to stay coherent when those constraints hit mid-response.
On validating the 90-second target
I ran through the flow myself and estimated that a trained operator, familiar with TALON's classification language and the hold gesture, would clear the three decision points in roughly 90 seconds under normal conditions. That's an assumption, not a measurement. I'm not a trained dispatcher, and I'm not under real operational pressure. The honest version of this claim is: the interface is designed so that the decisions themselves take seconds, not minutes, when the operator knows the system. Whether that holds under field conditions is what controlled testing with actual operators would need to close.
If this shipped, I'd measure: time from first alert to operator decision; how often dismissed alerts became real incidents; the operator override rate (too low might mean Asha is rubber-stamping plans she hasn't fully reviewed); and how long non-critical exceptions sit in the feed before someone acknowledges them.
08 – Designing for Chaos
When the happy path dies.
A system designed only for perfect conditions is a liability.
A wildfire at 4 AM doesn't give you clean inputs. Smoke kills camera feeds. Two sensors contradict each other. Three things fail at once. I designed for these scenarios late, but they're where the most honest design thinking happened. How a system handles failure tells you more about its design than how it handles success.
Chaos 01 · Hardware
Signal degradation – the "blind" drone
What happens
Dense smoke severs camera feeds at exactly the moments they're needed. Most systems: a "Connection Lost" modal and a gap in the map.
TALON fallback
No blocking error. TALON falls back to LiDAR + terrain model reconstruction automatically. Asha loses video but keeps spatial awareness, drone position, and telemetry. The degraded state surfaces as a status chip in the left panel. The mission continues.

Chaos 02 · Mid-Brief Correction
Operator needs to adjust a mission after TALON presents the brief
What happens
The Mission Brief modal opens for "Stage Perimeter." TALON recommends Forest North at 40m AGL. Asha can see the wind data has shifted since the plan was generated. She needs to relocate the hold to Buffer East and drop altitude, but the modal only shows Approve or Cancel.
Designed response
A voice input channel built into the brief itself. A "Change Plan" button sits in the modal footer alongside Cancel and Approve & Deploy. Asha clicks it and speaks the correction. TALON reprocesses in real time; the modal transitions through listening (waveform animation), processing (recomputing), and responded states inline. The badge updates to "TALON Revised Plan · Operator Override," the rationale text rewrites to reflect the correction, and the confirm button becomes "Deploy Revised Plan" in amber. The operator reviews what TALON understood before a single drone moves. Voice input needs a review step before execution; TALON never acts on a voice command it hasn't confirmed back.


TALON listens to Asha's brief about the change to a particular action and updates the plan accordingly.


Chaos 03 · Plan Disagreement
Operator rejects TALON's recommended containment plan
What happens
TALON proposes deploying to Forest North first. Asha knows the wind shifted northeast; the buffer corridor is now the correct anchor. The recommended plan is wrong. She needs to change it without losing time or re-entering parameters from scratch.
Designed response
An alternate plan surface, not a form. TALON has already pre-computed the buffer-first scenario before Asha ever saw the recommendation. She taps a new brief, speaks it, reviews the alternate plan summary (zone priorities reshuffled, drone positions adjusted), and authorizes from the same action bar. No blank fields. No re-entry.



Chaos 04 · Autonomous Handoff
Drone hits 14% battery mid-rescue – TALON acts without waiting
What happens
Lidar-02 is holding the perimeter monitor position during active rescue. Battery drops to 14%. There is no time for Asha to manually reassign coverage; the perimeter gap would be immediate.
Designed response
TALON initiates the handoff autonomously, then narrates it. A floating interrupt card appears over the map (not replacing it; Asha keeps full spatial awareness). Two-column layout: "RECALLING: Lidar-02 · 14% · RTB" and "TAKING OVER: Scout-03 · En route · ETA 00:42." The perimeter drone begins its return. The substitute is already moving. Asha didn't manage this; she reads it happening. This is the one moment TALON acts without asking. The design rule: autonomous battery handoffs are mechanical decisions the system makes faster than a human can. But the narration is immediate, and the operator retains the option to stand down the entire mission if the handoff doesn't satisfy her.
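The handoff rule is simple enough to state as code. A TypeScript sketch with assumed names and thresholds (HANDOFF_THRESHOLD, the 50% reserve cutoff is my invention): the function acts, then returns the narration the interrupt card displays.

```ts
// Hypothetical sketch of the one autonomous exception: battery handoff
// executes immediately, then narrates. The operator is informed, not
// asked - but retains full mission stand-down.

interface Drone {
  id: string;
  battery: number; // percent
  station: string; // e.g. "perimeter-north"
}

const HANDOFF_THRESHOLD = 15;

function checkHandoff(active: Drone, reserves: Drone[]): string | null {
  if (active.battery >= HANDOFF_THRESHOLD) return null;
  // 50% reserve cutoff is an illustrative assumption.
  const substitute = reserves.find(d => d.battery > 50 && d.id !== active.id);
  if (!substitute) return `ALERT: no substitute available for ${active.station}`;
  // Mechanical decision: recall and replace immediately, narrate as it happens.
  return (
    `RECALLING: ${active.id} · ${active.battery}% · RTB\n` +
    `TAKING OVER: ${substitute.id} · En route to ${active.station}`
  );
}
```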

Infrastructure failures escalate through three severity grades. Network degraded (340ms latency): thin yellow banner, operations valid, feeds may arrive late, nothing locks. Satellite feed offline: amber banner with a hard gate; AUTHORIZE is disabled until the link restores, because authorizing containment without spatial corroboration is a decision the system won't accelerate. Total infrastructure failure (TALON offline, satellite gone, all drone telemetry timed out) replaces the entire interface with a full-screen manual protocol: the fallback radio channel, the regional dispatch landline number, the last known system state, and a single restart button. Yellow means watch it. Amber means you're constrained. A red full-screen border means put down the interface and pick up the radio.
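The three grades read as an authorization gate. A TypeScript sketch, assuming hypothetical names (InfraState, GateDecision): only the amber and red states disable AUTHORIZE, and the red state's only output is the manual protocol.

```ts
// Hypothetical sketch of the three severity grades as authorization gates:
// yellow warns, amber hard-gates AUTHORIZE, red abandons the interface.

type InfraState = "nominal" | "networkDegraded" | "satelliteOffline" | "totalFailure";

interface GateDecision {
  banner: "none" | "yellow" | "amber" | "redFullScreen";
  authorizeEnabled: boolean;
  note: string;
}

function gate(state: InfraState): GateDecision {
  switch (state) {
    case "nominal":
      return { banner: "none", authorizeEnabled: true, note: "All systems nominal" };
    case "networkDegraded":
      return { banner: "yellow", authorizeEnabled: true, note: "Feeds may arrive late" };
    case "satelliteOffline":
      // No spatial corroboration: a decision the system won't accelerate.
      return { banner: "amber", authorizeEnabled: false, note: "AUTHORIZE disabled until link restores" };
    case "totalFailure":
      return { banner: "redFullScreen", authorizeEnabled: false, note: "Manual protocol: radio + dispatch landline" };
  }
}
```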

What if the operator dismisses an alert and the fire spreads anyway?
Rejected
The System Override. The system secretly re-activates containment without telling Asha.
Chosen
Re-Escalation as New Event. Spread triggers a "Critical Update" interrupt; a fresh, conscious decision is required.
Why
If the system can secretly override Asha, Asha is the illusion of control. The system can challenge her with new data. It cannot steal her authority.
Last resort: ABORT MISSION. Always in the action bar, always one tap, always the same position, no confirmation modal. When something is catastrophically wrong, Asha should not be hunting through menus.

09 – Visual Architecture: Engineering "Visual Silence"
The map and the UI are two separate things. That separation is what makes both work.
The map is Reality: 3D. Photorealistic terrain, fire volume, wind-constrained topography. This layer communicates physical threat viscerally. The UI is Intelligence: 2D. Drone positions, zone boundaries, and spread estimates as sharp scalable vectors floating above. Neither layer competes for the same visual depth plane, so the operator can instantly distinguish the physical environment from system proposals.
This separation directly enables the smoke degradation fallback: when the optical feed is lost, TALON switches to LiDAR, and Asha loses video without losing spatial awareness. The Intelligence layer is fully independent of the camera feed.

MAP Canvas
Reality/Intelligence split in practice: 3D terrain communicates threat, 2D vectors communicate system state.
The interface disappears during action. What remains is exactly the decision the operator needs to make. Nothing more.