AI Offense-Defense Dynamics Lead Researcher
Job Summary
As AI capabilities rapidly advance, we face a fundamental knowledge gap: we don't yet fully understand the complex dynamics that determine whether AI systems, or even their individual capabilities, predominantly threaten or protect society. In this role, you'll lead research to decode these offense-defense dynamics, examining how specific attributes of AI technologies influence their propensity to either enhance societal safety or amplify risks. You'll apply interdisciplinary methods to develop quantitative and qualitative frameworks that analyze how AI capabilities proliferate through society as either protective or harmful applications, producing actionable insights that help developers, evaluators, standards bodies, and policymakers anticipate and mitigate risks. This position offers a unique opportunity to shape how society evaluates and governs increasingly powerful AI systems, with direct impact on global efforts to maximize AI's benefits while minimizing its risks. This role is 100% remote but requires occasional travel.
About CARMA
The Center for AI Risk Management & Alignment (CARMA) works to help society navigate the complex and potentially catastrophic risks arising from increasingly powerful AI systems. Our mission is specifically to lower the risks to humanity and the biosphere from transformative AI.
We focus on grounding AI risk management in rigorous analysis, developing policy frameworks that squarely address AGI, advancing technical safety approaches, and fostering global perspectives on durable safety. Through these complementary approaches, CARMA aims to provide critical support to society for managing the outsized risks from advanced AI before they materialize.
CARMA is a fiscally sponsored project of Social & Environmental Entrepreneurs, Inc., a 501(c)(3) nonprofit public benefit corporation.
Responsibilities
• Develop quantitative system dynamics models capturing the interrelationships between technological, social, and institution...