🖥️ Leverage analysis
Generated 2026-04-20T19:42:35.849656Z
Camps in scope
Rankings
Friction semantics: 1 = no friction, 0 = fully blocked. Harm and friction share polarity: 1 = no harm, 0 = maximum harm. Rankings sort by net_composite = leverage × mean(friction) × mean(suffering) × mean(harm_robustness) --- the expected suffering reduction, net of harm caused, if the intervention lands. viability and suffering_composite stay visible so the marginal effect of each multiplier is legible.
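The composite chain behind each row can be sketched in a few lines of Python. This is a minimal reconstruction from the stated formula, assuming the single robustness and harm_robustness values displayed per row are already the camp-level means:

```python
def composite_chain(leverage, robustness, suffering, harm_robustness):
    """Reproduce the intermediate composites the rankings display.

    All inputs are in [0, 1]. robustness and harm_robustness are assumed
    to be pre-computed means over camp-level friction/harm scores.
    """
    viability = leverage * robustness              # leverage x mean(friction)
    suffering_composite = viability * suffering    # x mean(suffering)
    net = suffering_composite * harm_robustness    # x mean(harm_robustness)
    return viability, suffering_composite, net

# Checked against the top-ranked row (drug discovery):
v, s, n = composite_chain(0.55, 0.710, 0.420, 0.800)
# v ≈ 0.391, s ≈ 0.164, n ≈ 0.131
```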
- Deploy frontier AI (structure prediction, candidate screening, trial simulation) inside drug discovery and therapeutic development pipelines targeting neglected infectious disease, antimicrobial resistance, and LMIC-priority therapeutics.
  leverage 0.55 · robustness 0.710 · harm_robustness 0.800 · suffering 0.420 (viability 0.391 → suffering_composite 0.164 → net 0.131)
- Scale AI-assisted mental health triage, initial-line support, and care-navigation tooling targeted at the ~70% of global mental-health burden currently untreated, with integration into public health systems and evaluation against clinical outcomes.
  leverage 0.5 · robustness 0.650 · harm_robustness 0.758 · suffering 0.290 (viability 0.325 → suffering_composite 0.094 → net 0.071)
- Scale funding for interpretability and alignment research.
  leverage 0.6 · robustness 0.820 · harm_robustness 0.850 · suffering 0.160 (viability 0.492 → suffering_composite 0.079 → net 0.067)
- Expand frontier-lab compute capacity (chips, datacenters, networking).
  leverage 0.85 · robustness 0.660 · harm_robustness 0.350 · suffering 0.280 (viability 0.561 → suffering_composite 0.157 → net 0.055)
- Accelerate alternative-protein development (precision fermentation, cultivated meat, plant-based) with AI-driven strain engineering, scaffolding optimization, and supply-chain cost-down, targeting displacement of factory-farm protein at scale.
  leverage 0.45 · robustness 0.570 · harm_robustness 0.708 · suffering 0.290 (viability 0.257 → suffering_composite 0.074 → net 0.053)
- Accelerate grid and generation buildout (permitting reform, interconnection, new generation).
  leverage 0.75 · robustness 0.560 · harm_robustness 0.533 · suffering 0.220 (viability 0.420 → suffering_composite 0.092 → net 0.049)
- Invest in AI workforce training and retraining programs.
  leverage 0.35 · robustness 0.760 · harm_robustness 0.833 · suffering 0.200 (viability 0.266 → suffering_composite 0.053 → net 0.044)
Coalition analyses
Regulation (0.4) is the binding constraint --- FDA/EMA trial pipelines, not compute or public acceptance, decide whether AI-accelerated candidates reach patients. Suffering scores are honest: disease=0.85 and mortality=0.7 track the actual mechanism (AMR, neglected infectious disease, LMIC therapeutics), and the near-zero animal/mental-health weights correctly reflect that this intervention doesn't touch those layers. Harm scores look roughly right but harm_concentration=0.7 is slightly optimistic given that frontier drug-discovery pipelines consolidate inside a handful of pharma-lab partnerships; the IP capture risk is larger than a 0.7 implies if LMIC access isn't contractually baked in.
Regulation (0.45) and enterprise absorption (0.55) co-bind --- clinical-outcome evaluation plus public-health-system procurement are both slow, and a single high-profile iatrogenic case freezes the rollout. suffering_mental_health=0.7 is defensible given the ~70% untreated global burden, but suffering_mortality=0.3 overstates the link --- triage tools rarely catch acute suicidality well and the evidence base is thin. Harm scores under-count harm_displacement for therapists and peer-support workers; 0.7 is generous given that triage-layer automation is exactly where labor substitution bites first, and harm_lock_in=0.5 correctly flags that public-system contracts with frontier labs are sticky.
Friction is genuinely low across the board (0.82 robustness) --- no camp actively opposes interpretability work. The suffering numerator is the honest weak point: 0.16 is correct because alignment doesn't directly reduce suffering, it reduces variance on catastrophic tail outcomes and enables safe deployment of the interventions above it. Scoring it as a direct suffering-reducer mis-categorizes the mechanism. Harm scores at 0.85 are accurate --- the intervention consumes researcher-time, not water/land/labor --- but this is exactly the intervention whose net_composite understates its true role as a gating condition for everything else on the list.
Grid (0.4) is the hard binding constraint --- interconnection queues and generation lag, not capex or chips, decide buildout pace; this is also why intv_grid sits adjacent in the ranking as a prerequisite. Suffering scores are weak-to-fabricated: compute doesn't reduce suffering, it enables interventions that might. Crediting it 0.4 on disease and 0.4 on mortality double-counts the suffering that intv_drug_discovery already claims. Harm scores are the real problem --- harm_water=0.2, harm_land=0.5, harm_concentration=0.2, harm_extraction=0.3 all under-count first-order costs: datacenter water draw in arid regions, frontier-lab power concentration, and the oligopoly dynamics of the compute stack are exactly the harms this intervention maximizes. mean_harm_robustness=0.35 correctly penalizes it but the individual low scores hide how many separate veto points this triggers.
Public acceptance (0.35) is the binding constraint --- regulatory approval is catching up, cost-down is tractable, but cultural rejection of cultivated meat in key markets (Italy, Florida, Texas bans) freezes deployment regardless of technical readiness. suffering_animal=0.85 is correct and this is the only intervention on the list that seriously prices the factory-farming numerator. Harm scores look roughly honest; harm_displacement=0.7 correctly flags that this intervention *is* designed to displace ranching and industrial meat labor at scale, and that coalition cost is real.
Regulation (0.3) is the binding constraint and it's severe --- NEPA, interconnection queues, and state PUCs are where grid buildout dies, not capex. suffering_reduction_scores at 0.1–0.3 across the board are overstated because grid buildout is an enabler, not a direct suffering-reducer; it's being credited for downstream interventions. Harm scores under-count harm_land (0.3 is honest) and harm_concentration (0.7 is optimistic --- utility-scale buildout consolidates generation into a narrower set of IPPs and hyperscaler PPAs). Like compute, this intervention's real value is as a prerequisite, and the ranking can't express that cleanly.
Enterprise absorption (0.5) is the binding constraint --- training programs that aren't tied to actual hiring pipelines produce credentialed unemployed, which is the historical failure mode of every prior retraining wave (NAFTA TAA, coal country). suffering scores are thin: suffering_poverty=0.3 and suffering_mental_health=0.3 are defensible *if* the programs actually place workers, but the composite doesn't price the placement failure rate. Harm scores are correctly high --- training has few first-order externalities --- but the intervention's real weakness is mechanism efficacy, not harm, and the ranking doesn't surface that.
Ranking blindspots
- Ranked by direct suffering numerator (0.16) when its actual role is as a gating enabler for every higher-ranked intervention; net_composite under-prices it by treating it as a terminal rather than instrumental node.
- suffering_reduction_scores credit compute for disease/mortality layers it does not directly touch --- those belong to intv_drug_discovery downstream --- producing double-counting that inflates its net_composite above where a clean accounting would place it.
- mean_harm_robustness=0.35 averages four separate near-veto harms (water 0.2, concentration 0.2, extraction 0.3, lock_in 0.3) into a single scalar that hides the fact that each is a coalition-level veto point, not a graceful degradation.
- Crediting suffering_reduction for an enabler intervention overstates its direct numerator; the honest scoring is near-zero direct suffering reduction and high instrumental weight, which the composite cannot express.
- harm_concentration=0.7 under-counts the pharma-IP capture risk that determines whether LMIC access actually materializes; the suffering numerator assumes distribution that the harm side doesn't price.
- Ranked mid-pack despite being the only intervention with a serious animal-suffering numerator; the composite's uniform weighting across suffering layers hides that this is the singular entry addressing the largest suffering population by count.
- harm_displacement=0.7 under-counts therapist and peer-support labor substitution, which is exactly the layer triage tools target; the coalition cost with camp_displaced_workers is larger than the score implies.
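The averaging critique above --- several near-veto harms compressed into one scalar --- can be made concrete with a toy comparison of aggregators over the four compute-expansion harm scores quoted in the blindspot. Min and geometric-mean aggregation are illustrative alternatives here, not the report's actual method:

```python
import math

# The four near-veto harm scores quoted for compute expansion.
harms = {"water": 0.2, "concentration": 0.2, "extraction": 0.3, "lock_in": 0.3}
scores = list(harms.values())

mean_agg = sum(scores) / len(scores)              # arithmetic mean: 0.25
min_agg = min(scores)                             # binding constraint: 0.2
geo_agg = math.prod(scores) ** (1 / len(scores))  # geometric mean: ~0.245

# The mean sits comfortably above the worst score; min surfaces the
# veto point directly, which is the behavior the blindspot asks for.
```

Over these four dimensions alone the arithmetic mean (0.25) masks the 0.2 veto-level scores; the reported 0.35 presumably averages over additional, higher-scoring harm dimensions, which widens the gap further.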
Contested claims
DoD obligated AI-related contract spending rose substantially 2022-2025, driven by JWCC, Project Maven, and CDAO-managed pilots; precise totals are hampered by inconsistent AI tagging on contract line items.
- Artificial Intelligence and National Security (CRS Report R45178) · modeled_projection · weight 0.80
  locator: AI funding appendix; DoD budget rollups
- USASpending.gov federal contract awards · direct_measurement · weight 0.85
  locator: DoD AI-tagged obligations 2022-2025
- The Intercept coverage of Palantir contracts and DoD AI programs · journalistic_report · weight 0.55
  locator: Investigative pieces on DoD AI pilot failures and miscategorization
- Artificial Intelligence: DoD Needs Department-Wide Guidance to Inform Acquisitions (GAO-22-105834 and follow-ups) · direct_measurement · weight 0.75
  locator: Summary findings on acquisition-pace gaps
No other pure-play US defense-AI software vendor has matched Palantir's contract backlog or combatant-command integration depth; cloud-provider primes (AWS, Microsoft, Google, Oracle via JWCC) supply infrastructure, not mission-software integration.
- weight 0.75
  locator: Vendor-landscape discussion
- Palantir Technologies Inc. Form 10-K Annual Report (FY 2024) · primary_testimony · weight 0.60
  locator: Competition section, Item 1
- The Intercept coverage of Palantir contracts and DoD AI programs · journalistic_report · weight 0.50
  locator: Coverage framing Palantir as over-sold relative to internal-tool alternatives
Credible 2030 forecasts for US datacenter share of electricity consumption diverge by more than 2x --- from ~4.6% (IEA/EPRI conservative) to ~9% (Goldman Sachs, EPRI high scenario) --- reflecting genuine uncertainty, not measurement error.
- Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption · modeled_projection · weight 0.85
  locator: Scenario table: 4.6%-9.1% by 2030
- 2025/2026 Base Residual Auction Results · direct_measurement · weight 0.75
  locator: 2025/2026 BRA clearing results
- Generational growth: AI, data centers and the coming US power demand surge · modeled_projection · weight 0.70
  locator: Executive summary; 160% growth figure
- Electricity 2024 --- Analysis and Forecast to 2026 · modeled_projection · weight 0.80
  locator: Analysing Electricity Demand; data centres chapter
Frontier-lab and big-tech employees have episodically resisted DoD contracts (Google Maven 2018, Microsoft IVAS 2019, Microsoft/OpenAI IDF deployments 2024), producing temporary pauses but no sustained shift in vendor willingness.
- Google employee open letter opposing Project Maven · primary_testimony · weight 0.90
  locator: Open letter and subsequent Google announcement
- Microsoft employee open letter opposing HoloLens/IVAS contract · primary_testimony · weight 0.85
  locator: Employee open letter, February 2019
- Coverage of OpenAI and Microsoft AI use by Israeli military, 2024 · journalistic_report · weight 0.75
  locator: OpenAI military-use policy-change coverage, 2024
- Alex Karp public interviews and op-eds, 2023-2024 · primary_testimony · weight 0.50
  locator: Karp interviews dismissing employee resistance as inconsequential
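The report does not say how per-source weights combine into a claim-level credence. One hedged reading --- treating each weight as an independent probability that the source correctly supports the claim --- is a noisy-OR aggregation. This is a hypothetical sketch, not the generator's actual method, and the independence assumption is clearly too strong for sources that cover the same underlying events:

```python
def noisy_or(weights):
    """Hypothetical noisy-OR: probability that at least one source is
    right, treating weights as independent per-source reliabilities."""
    p_all_wrong = 1.0
    for w in weights:
        p_all_wrong *= 1.0 - w
    return 1.0 - p_all_wrong

# Weights from the DoD AI-spending claim above:
credence = noisy_or([0.80, 0.85, 0.55, 0.75])
# credence ≈ 0.9966 (the independence assumption inflates this)
```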