Content Integrity Analyst
About the Team
Trust & Safety Operations is central to protecting OpenAI’s platform, customers, and the public from abuse. We support a diverse customer base, from individual users and early-stage startups to global enterprises, across ChatGPT, our API, and new product surfaces as they launch.
Within the Support organization, we partner closely with Product, Engineering, Legal, Policy, Go-to-Market, and Operations teams to deliver a great user experience at scale while reducing material harm and mitigating catastrophic risks.
About the Role
We’re hiring experienced Trust & Safety / Content Integrity operators who can investigate complex cases, apply and evolve usage policy in real-world scenarios, and help build scalable systems that reduce risk over time. You will contribute as a subject-matter expert (SME) on high-stakes escalations, partnering with cross-functional stakeholders to drive fast, defensible outcomes. You will also help design the processes, tooling, and automation that power safe operations at scale.
This role is ideal for someone who combines strong judgment with sharp analytical instincts and who enjoys turning ambiguity into clear decisions, repeatable workflows, and durable automation.
Please note: This role may involve handling sensitive content, including material that may be highly confidential, sexual, violent, or otherwise disturbing.
Location: San Francisco, CA (hybrid: 3 days/week in office).
In This Role You Will:
- Apply usage policy with rigor and nuance: Interpret and apply OpenAI’s usage policies to complex, novel scenarios; provide clear guidance to customers and internal teams; document edge cases and propose policy refinements.
- Mitigate material harm and catastrophic risks: Triage, assess, and support actions on content and behavior that can drive real-world harm, including high-severity domains; escalate appropriately and help drive cases to resolution.
- Serve as an escalation SME for high-stakes cases: S...