Member of Technical Staff - Inference
BUILDING OPEN SUPERINTELLIGENCE INFRASTRUCTURE
Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. This lets researchers, startups, and enterprises run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.
We recently raised $15mm in funding (total of $20mm raised) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka Labs, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI) and many others.
ROLE IMPACT
This is a hybrid position spanning cloud LLM serving, LLM inference optimization, and RL systems. You will advance our ability to evaluate and serve, at scale, the models trained with our RL Lab. The two key areas are:
1. Building the infrastructure to serve LLMs efficiently at scale.
2. Optimizing and integrating inference systems into our RL training stack.
CORE TECHNICAL RESPONSIBILITIES
LLM Serving
- Multi-tenant LLM Serving: Build a multi-tenant LLM serving platform that operates across our cloud GPU fleets.
- GPU-Aware Scheduling: Design placement and scheduling algorithms for heterogeneous accelerators.
- Resilience & Failover: Implement multi-region/zone failover and traffic shifting for resilience and cost control.
- Autoscaling & Routing: Build autoscaling, routing, and load balancing to meet throughput/latency SLOs.
- Model Distribution: Optimize model distribution and cold-start times across clusters.
Inference Optimization & Performance
- Framework Development: Integrate and contribute to...