
# ASIL Decomposition: Automotive Safety Lessons for AI

ISO 26262 defines Automotive Safety Integrity Levels (ASIL) and rigorous rules for decomposing safety requirements across components. These methods are directly applicable to AI system design.

| Level | Description | Target Failure Rate (per hour) | Example |
| --- | --- | --- | --- |
| QM | Quality Management (no safety requirement) | Not specified | Comfort features |
| ASIL A | Lowest safety integrity | 10⁻⁶ | Warning lights |
| ASIL B | Low safety integrity | 10⁻⁷ | Cruise control |
| ASIL C | Medium safety integrity | 10⁻⁷ with higher diagnostic coverage | ABS |
| ASIL D | Highest safety integrity | 10⁻⁸ | Steering, braking |

Key insight: Higher ASIL requires not just lower failure rates, but more rigorous development processes, testing coverage, and independence requirements.

A safety requirement at ASIL X can be decomposed across redundant components, where each component may have a lower ASIL—provided they are sufficiently independent.

```mermaid
flowchart TD
    subgraph Original["Original Requirement"]
        A[ASIL D Function]
    end
    subgraph Decomposed["Decomposed Implementation"]
        B1[ASIL B Component 1]
        B2[ASIL B Component 2]
    end
    A --> |decompose| B1
    A --> |decompose| B2
    B1 --> |"both must fail"| Combined["Combined: ASIL D"]
    B2 --> |"both must fail"| Combined
```

ISO 26262 allows the following decomposition schemes:

| Original ASIL | Can Decompose To | Requirement |
| --- | --- | --- |
| ASIL D | ASIL D + QM | Independent |
| ASIL D | ASIL C + ASIL A | Independent |
| ASIL D | ASIL B + ASIL B | Independent |
| ASIL C | ASIL C + QM | Independent |
| ASIL C | ASIL B + ASIL A | Independent |
| ASIL B | ASIL B + QM | Independent |
| ASIL B | ASIL A + ASIL A | Independent |

The math: for AND-gate decomposition (both components must fail for the system to fail):

  • P(system_fail) = P(component1_fail) × P(component2_fail), assuming the failures are independent
  • Example: two components that each fail with probability 10⁻⁴ combine to 10⁻⁴ × 10⁻⁴ = 10⁻⁸. This is the intuition behind ASIL B + ASIL B ≈ ASIL D (the probabilities here are illustrative per-demand figures, not the per-hour rates in the table above)
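
This arithmetic in a couple of lines (plain Python, with illustrative probabilities):

```python
def and_gate_failure(p1: float, p2: float) -> float:
    """Probability that a redundant pair fails, assuming the system only
    fails when BOTH components fail, and the failures are independent."""
    return p1 * p2

print(and_gate_failure(1e-4, 1e-4))  # ≈1e-08: two modest parts, strict target
```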

Decomposition only works if components fail independently. ISO 26262 specifies criteria:

1. Different design:

  • Different algorithms or approaches
  • Not just “copy-paste with different variable names”

2. Different implementation:

  • Different development teams
  • Different programming languages (where practical)
  • Different compilers and toolchains

3. Physical separation:

  • Different processors or ECUs
  • Separate power supplies
  • Isolated communication channels

4. Temporal separation:

  • Different execution timing
  • Watchdog timers between components

Common cause failure analysis must verify that no single event (power spike, software bug, manufacturing defect) can cause both components to fail.
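
One way to sanity-check independence empirically (a hypothetical sketch, not a procedure from ISO 26262): compare the observed joint failure rate of two components on a shared evaluation set against the rate predicted by the product of their marginal failure rates. A ratio well above 1 signals correlated, common cause failures.

```python
def common_cause_ratio(fail1: list[bool], fail2: list[bool]) -> float:
    """Observed joint failure rate divided by the rate predicted under
    independence (P1 * P2). Values well above 1 suggest common cause
    failures that invalidate the decomposition."""
    n = len(fail1)
    p1 = sum(fail1) / n
    p2 = sum(fail2) / n
    joint = sum(a and b for a, b in zip(fail1, fail2)) / n
    return joint / (p1 * p2) if p1 > 0 and p2 > 0 else float("nan")

# Hypothetical per-case failure flags for two redundant detectors:
camera_fails = [False, True, False, False, True, False, False, False]
lidar_fails  = [False, True, False, False, False, False, True, False]
print(common_cause_ratio(camera_fails, lidar_fails))  # 2.0: correlated
```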

An AI function requiring ASIL D equivalent safety can be decomposed:

```mermaid
flowchart TD
    subgraph Requirement["Safety Requirement: ASIL D"]
        Req["Critical AI Decision<br/>P(fail) &lt; 10⁻⁸"]
    end
    subgraph Implementation["Decomposed Implementation"]
        ML["ML Component<br/>ASIL B equivalent"]
        Rule["Rule-Based Check<br/>ASIL B equivalent"]
    end
    Req --> ML
    Req --> Rule
    ML --> |both must agree| Safe[Safe Output]
    Rule --> |both must agree| Safe
```

For the decomposition to be valid:

  1. Different design: ML model vs. explicit rules
  2. Different implementation: neural network vs. deterministic code
  3. Independence verification: common cause failure analysis (a minimal sketch of the agreement gate follows)
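
A minimal sketch of the "both must agree" gate, assuming each channel returns an allow/deny verdict (the channel functions and thresholds here are hypothetical):

```python
from typing import Callable, Dict

Action = Dict[str, float]

def safety_gate(ml_approves: Callable[[Action], bool],
                rules_approve: Callable[[Action], bool],
                action: Action) -> bool:
    """AND-gate decomposition: the action executes only when BOTH
    independent channels approve. Either channel alone can veto, so an
    unsafe output escapes only if both channels fail simultaneously."""
    return ml_approves(action) and rules_approve(action)

# Hypothetical channels: an ML confidence threshold and a hard physical limit.
ml_channel = lambda a: a["model_score"] > 0.99
rule_channel = lambda a: a["speed_kph"] <= 130.0

print(safety_gate(ml_channel, rule_channel,
                  {"model_score": 0.995, "speed_kph": 120.0}))  # True
```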

AI systems face unique independence challenges:

| Challenge | Automotive Analog | AI-Specific Issue |
| --- | --- | --- |
| Common training data | Common supplier | Both models learned the same biases |
| Same architecture | Same design pattern | Same failure modes |
| Correlated inputs | Shared sensor | Same adversarial input fools both |
| Distribution shift | Environmental change | Both fail on novel inputs |

Mitigation strategies:

  1. Diverse architectures: Transformer + Decision tree + Rule-based
  2. Diverse training: Different datasets, different preprocessing
  3. Diverse modalities: Vision + LIDAR + Radar (different sensors)
  4. Human in the loop: fundamentally different failure modes than AI

Safety requirement: Emergency braking must not fail to activate when needed (ASIL D)

Decomposition:

| Component | ASIL | Implementation | Role |
| --- | --- | --- | --- |
| Primary detector | ASIL B | CNN on camera | Detect obstacles |
| Secondary detector | ASIL B | LIDAR point cloud | Confirm obstacles |
| Rule-based override | ASIL A | Deterministic code | Physics-based checks |
| Human override | QM | Driver brake pedal | Final authority |

Independence analysis:

  • ✅ Different sensors (camera vs LIDAR)
  • ✅ Different algorithms (CNN vs point cloud)
  • ⚠️ Both could fail in heavy fog → add radar as third modality
  • ✅ Rule-based and ML are fundamentally different
  • ✅ Human override is independent of all AI components

Combined ASIL: ASIL B(D) + ASIL B(D) = ASIL D equivalent (with verified independence)
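
A sketch of how these channels could combine (all names hypothetical). Because the requirement is "must not fail to activate," the channels are OR-ed on the activation side: the system misses an obstacle only if every channel misses it, which is the AND-gate on the failure side.

```python
def brake_command(camera_detects: bool,
                  lidar_detects: bool,
                  physics_check_triggers: bool,
                  driver_brakes: bool) -> bool:
    """Any channel may trigger braking; a missed activation requires
    all four independent channels to fail at once."""
    return (camera_detects or lidar_detects
            or physics_check_triggers or driver_brakes)

print(brake_command(False, True, False, False))  # True: LIDAR alone suffices
```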

Adapting ASIL decomposition to Delegation Risk:

For components in series (AND-gate, both must fail for system failure):

DR_system = DR_component1 × DR_component2 / Damage_base

Example:

  • Component 1: P(fail) = 0.01, Damage = $100K → Delegation Risk = $1,000
  • Component 2: P(fail) = 0.01, Damage = $100K → Delegation Risk = $1,000
  • Combined (both must fail): P(fail) = 0.0001, Damage = $100K → Delegation Risk = $10

For components in parallel (OR-gate, any failure causes system failure):

DR_system = DR_component1 + DR_component2

Example:

  • Component 1: Delegation Risk = $500
  • Component 2: Delegation Risk = $300
  • Combined (any can fail): Delegation Risk = $800

When a checker doesn't catch all failures (the sketch after this example pulls all three formulas together):

DR_residual = DR_component × (1 - coverage)

Example:

  • Component Delegation Risk: $10,000
  • Checker coverage: 95%
  • Residual Delegation Risk: $500
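
A small sketch of the three Delegation Risk formulas (plain Python; the function names are mine, not an established API):

```python
def delegation_risk(p_fail: float, damage: float) -> float:
    """Delegation Risk = failure probability x expected damage (dollars)."""
    return p_fail * damage

def dr_series(dr1: float, dr2: float, damage_base: float) -> float:
    """AND-gate: both components must fail; they share one damage base."""
    return dr1 * dr2 / damage_base

def dr_parallel(dr1: float, dr2: float) -> float:
    """OR-gate: a failure in any component causes system failure."""
    return dr1 + dr2

def dr_residual(dr: float, coverage: float) -> float:
    """Risk remaining after a checker that catches `coverage` of failures."""
    return dr * (1 - coverage)

# The worked examples above:
dr1 = delegation_risk(0.01, 100_000)   # $1,000
print(dr_series(dr1, dr1, 100_000))    # 10.0 -> $10
print(dr_parallel(500, 300))           # 800  -> $800
print(dr_residual(10_000, 0.95))       # ~500 -> $500
```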

Higher ASIL requires more rigorous verification:

| Aspect | ASIL A | ASIL B | ASIL C | ASIL D |
| --- | --- | --- | --- | --- |
| Code coverage | Statement | Branch | MC/DC | MC/DC |
| Testing | Functional | + Fault injection | + Stress | + Formal |
| Review | Inspection | + Walkthrough | + Analysis | + Independent |
| Independence | Recommended | Recommended | Required | Required |

Mapping these verification practices to AI systems:

| Aspect | Traditional | AI Equivalent |
| --- | --- | --- |
| Code coverage | Statement/Branch | Test set coverage |
| Functional testing | Requirements-based | Behavior specification |
| Fault injection | Hardware faults | Adversarial examples |
| Formal verification | Mathematical proof | Certified robustness |
| Independence | Different teams | Different architectures |

When decomposing an AI safety requirement:

  • Define the original safety requirement (target failure rate, damage)
  • Identify decomposition pattern (AND-gate, OR-gate, hybrid)
  • Assign component-level requirements (using ASIL math)
  • Verify independence:
    • Different algorithms/architectures?
    • Different training data/processes?
    • Different failure modes?
    • Physical/temporal separation?
  • Analyze common cause failures:
    • Same adversarial input affects both?
    • Same distribution shift affects both?
    • Same systematic bias?
  • Document decomposition rationale
  • Test combined system (not just components)

ASIL decomposition assumes:

  1. Known failure modes — AI can fail in novel ways
  2. Independent failures — AI systems may have correlated failures
  3. Stable failure rates — AI performance varies with input distribution
  4. Measurable coverage — AI “coverage” is hard to define

Conservative approach for AI:

  • Treat AI components as lower ASIL than calculation suggests
  • Require more diverse redundancy than traditional systems
  • Include non-AI fallbacks (rules, humans) in decomposition
  • Assume independence is imperfect; add margin