
Complexity Pricing

How do you price uncertainty that comes from structural complexity rather than from the underlying risk itself?

Standard risk pricing assumes you can estimate probability distributions for losses. But complex organizational structures create epistemic uncertainty—uncertainty about your uncertainty.

| Risk Type | What You Know | How to Price |
| --- | --- | --- |
| Simple risk | Distribution of outcomes | Expected loss + margin |
| Parameter uncertainty | Distribution family, uncertain parameters | Bayesian methods, wider intervals |
| Structural complexity | Unknown interactions, hidden dependencies | ??? |

Complexity doesn’t change the expected value of losses—it changes how confident you can be in that estimate.


Insurance: Loading Factors

Traditional insurance uses loading factors to account for uncertainty:

Premium = Expected Loss × (1 + Loading Factor)

| Source of Loading | Typical Factor |
| --- | --- |
| Administrative costs | 10-20% |
| Profit margin | 5-15% |
| Parameter uncertainty | 10-30% |
| Model uncertainty | 20-50% |
| Structural complexity | 50-300%+ |

The problem: loading factors are often set by judgment rather than systematic analysis.
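A toy calculation, assuming (as is common in practice) that loadings stack additively on expected loss; the expected loss and the factor values below are illustrative midpoints of the ranges above, not calibrated figures:

```python
# Illustrative only: loading factors assumed to stack additively.
expected_loss = 100_000  # expected annual loss in dollars (assumed)

loadings = {
    "administrative": 0.15,
    "profit": 0.10,
    "parameter_uncertainty": 0.20,
    "model_uncertainty": 0.35,
    "structural_complexity": 1.75,
}

premium = expected_loss * (1 + sum(loadings.values()))
print(f"Premium: ${premium:,.0f}")  # Premium: $355,000
```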

Actuarial Science: Credibility Theory

Credibility theory balances individual experience against class statistics:

Credibility-weighted estimate = Z × (individual estimate) + (1-Z) × (class estimate)

where Z (credibility factor) depends on:

  • Volume of individual data
  • Variance in individual experience
  • Variance in class experience

Application to complexity: High-complexity organizations have low credibility (Z → 0) because their structure makes historical data less predictive. They’re priced closer to worst-case class estimates.
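A minimal sketch of Bühlmann-style credibility weighting, where Z = n / (n + k); treating complexity as something that inflates k (so individual experience earns trust more slowly) is an assumption of this sketch, not a standard result:

```python
def credibility_estimate(individual_mean: float, class_mean: float,
                         n: int, k: float) -> float:
    """Bühlmann credibility: Z = n / (n + k), then blend the two estimates."""
    z = n / (n + k)
    return z * individual_mean + (1 - z) * class_mean

# Same data, but the complex organization gets a larger k, so Z shrinks
# and its price moves toward the (worse) class estimate.
print(credibility_estimate(50_000, 80_000, n=10, k=5))   # ≈ 60000
print(credibility_estimate(50_000, 80_000, n=10, k=50))  # ≈ 75000
```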

Operations Research: Robust Optimization

Robust optimization handles uncertainty sets rather than point estimates:

Minimize worst-case cost over all scenarios in uncertainty set U

For complexity pricing:

  • Larger uncertainty sets for complex structures
  • Premium reflects worst case within the set
  • Complexity score determines set size

Key insight: You’re not pricing expected loss—you’re pricing the worst plausible loss given structural uncertainty.

Software Engineering: Cyclomatic Complexity


McCabe (1976) defined cyclomatic complexity as the number of linearly independent paths through code:

V(G) = E - N + 2P

where E = edges, N = nodes, P = connected components.

Higher complexity correlates with:

  • More defects
  • Harder testing
  • Higher maintenance costs

Analogy for organizations: Delegation structures have “paths” through authority chains. More paths = harder to predict behavior.
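A small sketch of McCabe's formula, counting connected components with a union-find (the graph representation and the example graph are assumptions of the sketch):

```python
def cyclomatic_complexity(nodes: set, edges: list) -> int:
    """McCabe's V(G) = E - N + 2P for a control-flow graph."""
    parent = {v: v for v in nodes}

    def find(v):  # union-find root lookup with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two endpoints

    p = len({find(v) for v in nodes})  # connected components
    return len(edges) - len(nodes) + 2 * p

# Diamond graph (a branch that rejoins): E=4, N=4, P=1, so V(G) = 2.
print(cyclomatic_complexity({1, 2, 3, 4}, [(1, 2), (1, 3), (2, 4), (3, 4)]))
```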


Carrying the analogy to delegation structures, a candidate rubric (judgment-based, pending calibration) assigns contributions for static structural factors:

| Factor | Contribution | Rationale |
| --- | --- | --- |
| Delegation depth | +2 per layer | Each layer adds interpretation variance |
| Parallel delegates | +1 per delegate | Coordination uncertainty |
| Shared resources | +2 per shared resource | Contention and priority conflicts |
| Informal relationships | +3 per relationship | Undocumented authority creates surprises |
| Cross-layer dependencies | +2 per dependency | Bypasses normal authority flow |
| Ambiguous boundaries | +3 per boundary | Unclear who's responsible |
| External dependencies | +2 per dependency | Less control, less visibility |

Static structure isn’t everything. Some complexity emerges from dynamics:

| Factor | Contribution | Rationale |
| --- | --- | --- |
| High turnover | +2 | Institutional knowledge loss |
| Rapid growth | +3 | Structure lags reality |
| Recent reorganization | +2 | Transition period uncertainty |
| Multi-jurisdictional | +2 per jurisdiction | Different rules, enforcement |

Observability factors capture how hard it is to verify what the structure actually is:

| Factor | Contribution | Rationale |
| --- | --- | --- |
| Poor documentation | +3 | Can't verify claims about structure |
| Audit findings | +1 per finding | Evidence of hidden issues |
| Information asymmetry | +2 | Principal can't observe agent |
| Opacity of decision-making | +3 | Can't attribute outcomes to causes |
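A sketch of a scorer implementing the factor tables above; the field names, and the choice to count only jurisdictions beyond the first, are assumptions of the sketch:

```python
STATIC_WEIGHTS = {          # contribution per unit, from the structural table
    "delegation_layers": 2, "parallel_delegates": 1, "shared_resources": 2,
    "informal_relationships": 3, "cross_layer_dependencies": 2,
    "ambiguous_boundaries": 3, "external_dependencies": 2,
}

FLAG_WEIGHTS = {            # one-off contributions, from the later tables
    "high_turnover": 2, "rapid_growth": 3, "recent_reorganization": 2,
    "poor_documentation": 3, "information_asymmetry": 2,
    "opaque_decision_making": 3,
}

def complexity_score(counts: dict, flags: dict,
                     jurisdictions: int = 1, audit_findings: int = 0) -> int:
    """Sum the structural, dynamic, and observability contributions."""
    score = sum(STATIC_WEIGHTS[f] * n for f, n in counts.items())
    score += sum(w for f, w in FLAG_WEIGHTS.items() if flags.get(f))
    score += 2 * max(jurisdictions - 1, 0)  # +2 per additional jurisdiction
    score += audit_findings                 # +1 per finding
    return score

print(complexity_score({"delegation_layers": 3, "shared_resources": 1},
                       {"rapid_growth": True}, jurisdictions=2))  # 13
```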

From Complexity Score to Uncertainty Multiplier


If we had data on organizational failures vs. complexity scores, we could fit:

Uncertainty Multiplier = f(Complexity Score)

Proposed functional form (requires empirical validation):

σ_multiplier = e^(0.15 × complexity_score)

| Complexity Score | Uncertainty Multiplier |
| --- | --- |
| 2 | 1.35× (±35%) |
| 5 | 2.1× (±110%) |
| 10 | 4.5× (±350%) |
| 15 | 9.5× (±850%) |
| 20 | 20× (±1900%) |

Why exponential? Each complexity factor potentially:

  1. Introduces new failure modes (additive)
  2. Creates interactions with existing factors (multiplicative)

If interactions dominate, we expect multiplicative (exponential) growth in uncertainty.
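The table above is this one function evaluated at a few scores (the 0.15 growth rate is the proposed, unvalidated parameter):

```python
import math

def uncertainty_multiplier(score: float, rate: float = 0.15) -> float:
    """Proposed exponential map from complexity score to uncertainty."""
    return math.exp(rate * score)

for s in (2, 5, 10, 15, 20):
    print(s, round(uncertainty_multiplier(s), 2))
# 2 1.35 | 5 2.12 | 10 4.48 | 15 9.49 | 20 20.09
```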

Given an uncertainty multiplier σ_m, the baseline pricing rule is:

Premium = Expected Loss × σ_m × Safety Factor

The safety factor depends on insurer risk tolerance and capital requirements.


Approach 1: Percentile Pricing

Price at the upper end of the confidence interval:

Premium = E[Loss] × (1 + k × σ_multiplier)

where k reflects how many standard deviations to cover (typically 1-3).

Advantage: Simple, conservative. Disadvantage: May be too expensive for high complexity.
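As a sketch (numbers illustrative):

```python
def percentile_premium(expected_loss: float, sigma_multiplier: float,
                       k: float = 2.0) -> float:
    """Premium = E[Loss] × (1 + k × σ_multiplier); k = std devs covered."""
    return expected_loss * (1 + k * sigma_multiplier)

print(percentile_premium(100_000, sigma_multiplier=2.1))  # ≈ 520000
```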

Approach 2: Variance Loading

Load premium proportional to variance:

Premium = E[Loss] + λ × Var[Loss]

For complexity: Var[Loss] = Base_Var × σ_multiplier²

Advantage: Standard actuarial practice. Disadvantage: Requires variance estimate.
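A sketch with an assumed base variance and loading rate λ:

```python
def variance_loaded_premium(expected_loss: float, base_var: float,
                            sigma_multiplier: float, lam: float) -> float:
    """Premium = E[Loss] + λ × Var[Loss], with Var scaled by σ_multiplier²."""
    return expected_loss + lam * base_var * sigma_multiplier**2

# λ is per dollar-squared of variance, so it is numerically small.
print(variance_loaded_premium(100_000, base_var=5e7,
                              sigma_multiplier=2.1, lam=1e-4))  # ≈ 122050
```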

Approach 3: Expected Shortfall

Use Expected Shortfall (CVaR) at a high confidence level:

Premium = ES_α(Loss) where α depends on complexity

| Complexity Score | α Level | Interpretation |
| --- | --- | --- |
| Low (< 5) | 95% | Standard tail risk |
| Medium (5-10) | 99% | More conservative |
| High (10-15) | 99.9% | Near worst-case |
| Very High (> 15) | 99.99% | Extreme caution |

Advantage: Coherent risk measure, tail-focused. Disadvantage: Requires distribution assumptions.
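A Monte Carlo sketch of the α schedule; the lognormal severity model and its parameters are arbitrary assumptions for illustration:

```python
import numpy as np

def expected_shortfall(losses: np.ndarray, alpha: float) -> float:
    """Empirical ES_α: mean of losses at or beyond the α-quantile (VaR)."""
    var = np.quantile(losses, alpha)
    return float(losses[losses >= var].mean())

ALPHA_BY_COMPLEXITY = {"low": 0.95, "medium": 0.99,
                       "high": 0.999, "very high": 0.9999}

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=11.0, sigma=1.0, size=1_000_000)  # simulated
for level, alpha in ALPHA_BY_COMPLEXITY.items():
    print(f"{level}: ES_{alpha} = {expected_shortfall(losses, alpha):,.0f}")
```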

Approach 4: Robust Pricing

Use robust optimization over uncertainty sets:

Premium = max_{θ ∈ Θ(complexity)} E_θ[Loss]

where Θ(complexity) is the set of plausible models given complexity level.

Advantage: No distribution assumptions. Disadvantage: Defining Θ is challenging.
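A minimal sketch where Θ is a finite set of candidate mean losses whose spread grows with complexity; the ±10%-per-point rule is an illustrative assumption, not a recommended calibration:

```python
def robust_premium(base_mean_loss: float, complexity_score: int) -> float:
    """Worst-case expected loss over a complexity-sized uncertainty set."""
    spread = 0.10 * complexity_score           # set radius grows with score
    candidates = [base_mean_loss * (1 + s) for s in (-spread, 0.0, spread)]
    return max(candidates)                     # max over θ ∈ Θ(complexity)

print(robust_premium(100_000, complexity_score=12))  # ≈ 220000
```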


Step 1: Assess Complexity

Score the organization with a structured questionnaire covering:

  • Organization chart analysis
  • Process documentation review
  • Interviews with key personnel
  • Audit report review
  • External dependency mapping

Step 2: Map Score to Uncertainty Multiplier

Use a calibrated function (initially judgment-based, refined with data):

```python
def complexity_multiplier(complexity_score: float) -> tuple[str, float]:
    """Map a complexity score to an uncertainty label and premium multiplier."""
    if complexity_score < 5:
        return "low", 1.5
    elif complexity_score < 10:
        return "medium", 3.0
    elif complexity_score < 15:
        return "high", 7.0
    else:
        return "very high", 15.0  # a floor ("15.0+"); scale up beyond 15
```

Step 3: Set the Premium

Premium = Base_Premium × multiplier

where Base_Premium is the rate for a simple, well-understood structure.
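Putting Steps 2 and 3 together for a hypothetical organization scoring 12 (the base premium is an assumed figure):

```python
label, multiplier = complexity_multiplier(12)  # from Step 2: ("high", 7.0)
base_premium = 10_000                          # simple-structure rate (assumed)
print(label, base_premium * multiplier)        # high 70000.0
```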

Step 4: Offer Complexity Reduction Incentives


Show the client how the premium changes with structural changes:

| Change | Complexity Δ | Premium Δ |
| --- | --- | --- |
| Document informal relationships | -3 | -15% |
| Eliminate shared resources | -2 | -10% |
| Add clear authority boundaries | -3 | -15% |
| Reduce to 2 layers | -2 | -10% |

Validation Challenges

Data scarcity: Organizational failures linked to complexity are rare and idiosyncratic, so building a training dataset is difficult.

Survivorship bias: We only observe organizations that haven't catastrophically failed. High-complexity survivors may have unobserved mitigating factors.

Confounding: Complex organizations may differ from simple ones in ways that affect risk independently of complexity.

Ways to make progress despite these challenges:

  1. Historical case studies: Analyze past organizational failures for complexity factors
  2. Simulation: Agent-based models of delegation chains under stress
  3. Expert elicitation: Structured surveys of risk professionals
  4. Regulatory data: Insurance claim data by organizational characteristics
  5. Near-miss analysis: Study incidents that almost became failures

Relationship to Other Frameworks

Complexity pricing extends the delegation accounting framework by quantifying the uncertainty term:

Net Delegation Value = Receivable - (Exposure × σ_multiplier) - Costs

The complexity multiplier captures how much to discount the exposure estimate based on structural uncertainty.
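As a sketch with illustrative numbers:

```python
def net_delegation_value(receivable: float, exposure: float,
                         sigma_multiplier: float, costs: float) -> float:
    """Net Delegation Value = Receivable - Exposure × σ_multiplier - Costs."""
    return receivable - exposure * sigma_multiplier - costs

print(net_delegation_value(150_000, exposure=40_000,
                           sigma_multiplier=2.1, costs=20_000))  # ≈ 46000
```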

Pattern interconnection creates complexity when defense patterns share failure modes. Complexity pricing provides a method to quantify this “correlation tax.”

Compositional risk measures assume you can decompose system risk. Complexity represents the degree to which this decomposition fails—interactions that can’t be captured by analyzing components separately.


Open Questions

  1. Optimal complexity scoring: Which factors matter most? How should they be weighted?

  2. Functional form: Is the uncertainty multiplier truly exponential? Could it be polynomial, logistic, or stepped?

  3. Industry variation: Do complexity factors have different impacts in different domains?

  4. Dynamic complexity: How do you price complexity that changes over time?

  5. Complexity reduction ROI: What’s the most cost-effective way to reduce complexity for insurance purposes?

  6. AI-specific factors: What additional complexity factors apply to AI systems (emergent behavior, capability uncertainty, alignment uncertainty)?


References

  • Bühlmann & Gisler (2005) - A Course in Credibility Theory
  • Klugman, Panjer & Willmot (2012) - Loss Models
  • Daykin, Pentikäinen & Pesonen (1994) - Practical Risk Theory for Actuaries
  • Ben-Tal, El Ghaoui & Nemirovski (2009) - Robust Optimization
  • Bertsimas & Brown (2009) - "Constructing Uncertainty Sets for Robust Linear Optimization"
  • McCabe (1976) - "A Complexity Measure", IEEE Transactions on Software Engineering
  • Halstead (1977) - Elements of Software Science
  • Perrow (1984) - Normal Accidents
  • Reason (1997) - Managing the Risks of Organizational Accidents
  • Leveson (2011) - Engineering a Safer World
  • Ellsberg (1961) - "Risk, Ambiguity, and the Savage Axioms"
  • Gilboa & Schmeidler (1989) - "Maxmin Expected Utility with Non-Unique Prior"
  • Hansen & Sargent (2001) - "Robust Control and Model Uncertainty"