How we look at upstream decisions

The aim is to match specific field behavior to the decision points that made it likely, not to re-litigate every design choice.

What we look at

  1. Where risk was consciously parked. Design reviews, trade-off notes, and “non-blocking” concerns that were accepted to keep the program moving.
  2. Interfaces nobody fully owned. Mechanical joints, electrical boundaries, thermal paths, and software-hardware seams where assumptions span teams.
  3. Field reality that diverged from models. How the system is actually operated, maintained, and misused versus what the early models assumed.
  4. Paths where variability piles up. Tolerance chains, supplier spread, assembly steps, calibration, and control loops that quietly widen the state space.
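To make the fourth point concrete, here is a minimal sketch of how variability piles up along a tolerance chain, comparing a worst-case stack against a root-sum-square (RSS) stack. The tolerance values are illustrative only, not taken from any real design; the point is that the two models disagree, and a review needs to know which one the original team assumed.

```python
import math

# Hypothetical per-step tolerances (mm) along one assembly chain.
# Values are illustrative, not from any real program.
tolerances = [0.10, 0.05, 0.20, 0.08]

# Worst case: every step lands at its limit in the same direction.
worst_case = sum(tolerances)

# RSS: assumes independent, roughly normal variation at each step.
rss = math.sqrt(sum(t * t for t in tolerances))

print(f"worst case: +/-{worst_case:.2f} mm")
print(f"RSS:        +/-{rss:.2f} mm")
```

If the design margin was set against the RSS number but suppliers drift together rather than independently, the real stack creeps toward the worst case, which is exactly the kind of quiet widening of the state space the review looks for.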

What we intentionally ignore

Design choices with no plausible link to the field behavior under review. The goal is diagnosis, not a full re-audit of the program's history, so decisions are examined only when something observed in the field points back at them.

When we say “do not change this”

There are parts of a system that look awkward but are now load-bearing in ways nobody expected. We call out these areas explicitly when:

  1. the component quietly compensates for variability elsewhere in the chain,
  2. it sits on an interface no single team fully owns, or
  3. field behavior has come to depend on it in ways the original models never predicted.

In those cases, we document why they must remain fixed while other changes are explored around them.

When we say “this will bite you later”

We flag decisions that are not yet failing in the field but are already structurally fragile. Typical patterns include:

  1. risk that was consciously parked and never revisited,
  2. cross-team assumptions living at interfaces nobody fully owns,
  3. models drifting further from how the system is actually operated and maintained, and
  4. tolerance and variability paths that keep widening the state space.

These are written down plainly so leadership can decide whether to defer, accept, or address them now.
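One way to keep these write-ups plain and decidable is a fixed record shape per flagged decision. The sketch below is a hypothetical format, not an existing tool; every field name and the sample entry are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    # The three choices leadership can make, plus the default state.
    DEFER = "defer"
    ACCEPT = "accept"
    ADDRESS = "address"
    UNDECIDED = "undecided"

@dataclass
class FlaggedDecision:
    # All field names are illustrative; adapt to your own review tooling.
    decision: str        # the upstream decision being flagged
    fragility: str       # why it is structurally fragile today
    field_evidence: str  # observed field behavior linked to it, if any
    disposition: Disposition = Disposition.UNDECIDED

# Hypothetical example entry.
entry = FlaggedDecision(
    decision="Non-blocking thermal margin concern accepted in design review",
    fragility="Margin erodes as supplier spread widens",
    field_evidence="None yet; units run hotter than the early models assumed",
)
print(entry.disposition.value)  # stays "undecided" until leadership decides
```

Separating the evidence field from the fragility field keeps "not yet failing" entries honest: a blank or weak evidence line is itself useful information when leadership weighs defer against address.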