ai-for-less-suffering.com

🖥️ Leverage analysis

Generated 2026-04-20T19:42:35.849656Z

Camps in scope

Rankings

Friction semantics: 1 = no friction, 0 = fully blocked. Harm shares the same polarity: 1 = no harm, 0 = maximum harm. Rankings sort by net_composite = leverage × mean(friction) × mean(suffering) × mean(harm_robustness) --- the expected suffering reduction net of the harm caused if the intervention lands. The intermediate products viability (= leverage × mean(friction)) and suffering_composite (= viability × mean(suffering)) stay visible so the marginal effect of each multiplier is legible.

  1. Deploy frontier AI (structure prediction, candidate screening, trial simulation) inside drug discovery and therapeutic development pipelines targeting neglected infectious disease, antimicrobial resistance, and LMIC-priority therapeutics.

    leverage 0.55 · robustness 0.710 · harm_robustness 0.800 · suffering 0.420 (viability 0.391 → suffering_composite 0.164 → net 0.131)
  2. Scale AI-assisted mental health triage, first-line support, and care-navigation tooling targeted at the ~70% of global mental-health burden that currently goes untreated, with integration into public health systems and evaluation against clinical outcomes.

    leverage 0.5 · robustness 0.650 · harm_robustness 0.758 · suffering 0.290 (viability 0.325 → suffering_composite 0.094 → net 0.071)
  3. Scale funding for interpretability and alignment research.

    leverage 0.6 · robustness 0.820 · harm_robustness 0.850 · suffering 0.160 (viability 0.492 → suffering_composite 0.079 → net 0.067)
  4. Expand frontier-lab compute capacity (chips, datacenters, networking).

    leverage 0.85 · robustness 0.660 · harm_robustness 0.350 · suffering 0.280 (viability 0.561 → suffering_composite 0.157 → net 0.055)
  5. Accelerate alternative-protein development (precision fermentation, cultivated meat, plant-based) with AI-driven strain engineering, scaffolding optimization, and supply-chain cost-down, targeting displacement of factory-farm protein at scale.

    leverage 0.45 · robustness 0.570 · harm_robustness 0.708 · suffering 0.290 (viability 0.257 → suffering_composite 0.074 → net 0.053)
  6. Accelerate grid and generation buildout (permitting reform, interconnection, new generation).

    leverage 0.75 · robustness 0.560 · harm_robustness 0.533 · suffering 0.220 (viability 0.420 → suffering_composite 0.092 → net 0.049)
  7. Invest in AI workforce training and retraining programs.

    leverage 0.35 · robustness 0.760 · harm_robustness 0.833 · suffering 0.200 (viability 0.266 → suffering_composite 0.053 → net 0.044)
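
The chained multipliers above can be checked by recomputing each row. A minimal Python sketch, assuming the displayed robustness and harm_robustness columns are the mean(friction) and mean(harm_robustness) terms of the net_composite formula; the short labels are illustrative shorthand, not identifiers from the report:

```python
# Recompute viability, suffering_composite, and net_composite for each
# ranked intervention from the four inputs printed in the rankings.

rows = [
    # (label, leverage, robustness = mean(friction), harm_robustness, suffering)
    ("drug_discovery", 0.55, 0.710, 0.800, 0.420),
    ("mental_health",  0.50, 0.650, 0.758, 0.290),
    ("alignment",      0.60, 0.820, 0.850, 0.160),
    ("compute",        0.85, 0.660, 0.350, 0.280),
    ("alt_protein",    0.45, 0.570, 0.708, 0.290),
    ("grid",           0.75, 0.560, 0.533, 0.220),
    ("retraining",     0.35, 0.760, 0.833, 0.200),
]

net = {}
for label, leverage, robustness, harm_rob, suffering in rows:
    viability = leverage * robustness            # leverage x mean(friction)
    suffering_composite = viability * suffering  # ... x mean(suffering)
    net[label] = suffering_composite * harm_rob  # ... x mean(harm_robustness)
    print(f"{label:>15}: viability={viability:.3f}  "
          f"suffering_composite={suffering_composite:.3f}  net={net[label]:.3f}")

# The rankings are already sorted by net_composite, descending.
assert list(net) == sorted(net, key=net.get, reverse=True)
```

The recomputed net values reproduce the ranking's printed nets, and the final assertion confirms the list order is exactly descending net_composite.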

Coalition analyses

Contesting: 📉 X-risk

1. Drug discovery: Regulation (0.4) is the binding constraint --- FDA/EMA trial pipelines, not compute or public acceptance, decide whether AI-accelerated candidates reach patients. Suffering scores are honest: disease=0.85 and mortality=0.7 track the actual mechanism (AMR, neglected infectious disease, LMIC therapeutics), and the near-zero animal/mental-health weights correctly reflect that this intervention doesn't touch those layers. Harm scores look roughly right, but harm_concentration=0.7 is slightly optimistic given that frontier drug-discovery pipelines consolidate inside a handful of pharma-lab partnerships; the IP-capture risk is larger than 0.7 implies if LMIC access isn't contractually baked in.

2. Mental-health triage: Regulation (0.45) and enterprise absorption (0.55) co-bind --- clinical-outcome evaluation plus public-health-system procurement are both slow, and a single high-profile iatrogenic case freezes the rollout. suffering_mental_health=0.7 is defensible given the ~70% untreated global burden, but suffering_mortality=0.3 overstates the link --- triage tools rarely catch acute suicidality well, and the evidence base is thin. Harm scores under-count harm_displacement for therapists and peer-support workers; 0.7 is generous given that triage-layer automation is exactly where labor substitution bites first, while harm_lock_in=0.5 correctly flags that public-system contracts with frontier labs are sticky.

3. Interpretability and alignment funding: Friction is genuinely low across the board (0.82 robustness) --- no camp actively opposes interpretability work. The suffering numerator is the honest weak point: 0.16 is correct because alignment doesn't directly reduce suffering; it reduces variance on catastrophic tail outcomes and enables safe deployment of the interventions above it. Scoring it as a direct suffering-reducer mis-categorizes the mechanism. Harm scores at 0.85 are accurate --- the intervention consumes researcher-time, not water/land/labor --- but this is exactly the intervention whose net_composite understates its true role as a gating condition for everything else on the list.

4. Compute expansion: Grid (0.4) is the hard binding constraint --- interconnection queues and generation lag, not capex or chips, decide buildout pace; this is also why intv_grid sits adjacent in the ranking as a prerequisite. Suffering scores are weak-to-fabricated: compute doesn't reduce suffering; it enables interventions that might. Crediting it 0.4 on disease and 0.4 on mortality double-counts the suffering that intv_drug_discovery already claims. Harm scores are the real problem --- harm_water=0.2, harm_land=0.5, harm_concentration=0.2, harm_extraction=0.3 all under-count first-order costs: datacenter water draw in arid regions, frontier-lab power concentration, and the oligopoly dynamics of the compute stack are exactly the harms this intervention maximizes. mean_harm_robustness=0.35 correctly penalizes it, but the individual low scores hide how many separate veto points this triggers.

5. Alternative proteins: Public acceptance (0.35) is the binding constraint --- regulatory approval is catching up and cost-down is tractable, but cultural rejection of cultivated meat in key markets (Italy, Florida, Texas bans) freezes deployment regardless of technical readiness. suffering_animal=0.85 is correct, and this is the only intervention on the list that seriously prices the factory-farming numerator. Harm scores look roughly honest; harm_displacement=0.7 correctly flags that this intervention *is* designed to displace ranching and industrial-meat labor at scale, and that coalition cost is real.

6. Grid buildout: Regulation (0.3) is the binding constraint, and it's severe --- NEPA, interconnection queues, and state PUCs are where grid buildout dies, not capex. The suffering_reduction scores (0.1–0.3 across the board) are overstated because grid buildout is an enabler, not a direct suffering-reducer; it's being credited for downstream interventions. Harm scores are mixed --- harm_land=0.3 is honest, but harm_concentration=0.7 is optimistic: utility-scale buildout consolidates generation into a narrower set of IPPs and hyperscaler PPAs. Like compute, this intervention's real value is as a prerequisite, and the ranking can't express that cleanly.

7. Workforce retraining: Enterprise absorption (0.5) is the binding constraint --- training programs that aren't tied to actual hiring pipelines produce credentialed unemployed workers, which is the historical failure mode of every prior retraining wave (NAFTA-era TAA, coal country). Suffering scores are thin: suffering_poverty=0.3 and suffering_mental_health=0.3 are defensible *if* the programs actually place workers, but the composite doesn't price the placement failure rate. Harm scores are correctly high --- training has few first-order externalities --- but the intervention's real weakness is mechanism efficacy, not harm, and the ranking doesn't surface that.

Ranking blindspots

Contested claims

DoD obligated AI-related contract spending rose substantially from 2022 to 2025, driven by JWCC, Project Maven, and CDAO-managed pilots; precise totals are hard to pin down because AI tagging on contract line items is inconsistent.

No other pure-play US defense-AI software vendor has matched Palantir's contract backlog or combatant-command integration depth; cloud-provider primes (AWS, Microsoft, Google, Oracle via JWCC) supply infrastructure, not mission-software integration.

Credible 2030 forecasts for US datacenter share of electricity consumption diverge by roughly 2x --- from ~4.6% (IEA/EPRI conservative) to ~9% (Goldman Sachs, EPRI high scenario) --- reflecting genuine uncertainty, not measurement error.

Frontier-lab and big-tech employees have episodically resisted DoD contracts (Google Maven 2018, Microsoft IVAS 2019, Microsoft/OpenAI IDF deployments 2024), producing temporary pauses but no sustained shift in vendor willingness.
