LLM Pre-training & Distributed Engineer (AI Infrastructure)
Hyphen Connect is seeking a highly skilled LLM Pre-training & Distributed Systems Engineer to join our AI Infrastructure team in Seattle, USA. This role is essential for orchestrating large-scale machine learning training runs and optimizing distributed infrastructure to support our cutting-edge AI initiatives.
In this position, you will orchestrate distributed training runs across 1,000+ GPUs using frameworks such as PyTorch, DeepSpeed, or Megatron-LM. You will optimize networking (InfiniBand/RDMA) and memory management to prevent out-of-memory (OOM) errors, and automate checkpointing and failure recovery during month-long training runs.
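To give a flavor of the failure-recovery work described above, here is a minimal, framework-agnostic sketch of atomic checkpoint rotation (the function names and file layout are illustrative only, not Hyphen Connect tooling; production runs would serialize model/optimizer state via their training framework rather than pickle):

```python
import os
import pickle
import tempfile

def save_checkpoint(state, ckpt_dir, step, keep_last=3):
    """Atomically write a checkpoint for `step` and prune old ones."""
    os.makedirs(ckpt_dir, exist_ok=True)
    path = os.path.join(ckpt_dir, f"ckpt_{step:08d}.pkl")
    # Write to a temp file, then rename: a crash mid-write never
    # corrupts the newest valid checkpoint.
    fd, tmp = tempfile.mkstemp(dir=ckpt_dir)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)
    # Keep only the newest `keep_last` checkpoints on disk.
    ckpts = sorted(p for p in os.listdir(ckpt_dir) if p.startswith("ckpt_"))
    for old in ckpts[:-keep_last]:
        os.remove(os.path.join(ckpt_dir, old))
    return path

def load_latest(ckpt_dir):
    """Resume from the newest checkpoint, or return None on a fresh start."""
    ckpts = sorted(p for p in os.listdir(ckpt_dir) if p.startswith("ckpt_"))
    if not ckpts:
        return None
    with open(os.path.join(ckpt_dir, ckpts[-1]), "rb") as f:
        return pickle.load(f)
```

Zero-padded step numbers make lexicographic sort equal chronological order, so resuming after a node failure is simply `load_latest`.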
The ideal candidate will have deep expertise in 3D parallelism (Data, Tensor, Pipeline) and experience managing SLURM or Kubernetes-based GPU clusters. A strong systems engineering background with proficiency in C++, CUDA, and Python is essential for success in this role.
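For candidates less familiar with the term, "3D parallelism" refers to splitting a cluster along data-, tensor-, and pipeline-parallel axes at once. A minimal sketch of the sizing arithmetic (the function name is hypothetical; this is the constraint Megatron-LM-style launchers enforce, not their actual API):

```python
def dp_degree(world_size, tp, pp):
    """Derive the data-parallel degree from the cluster size and the
    tensor-parallel (tp) and pipeline-parallel (pp) degrees.
    The three axes must factor the GPU count exactly:
        world_size = dp * tp * pp
    """
    if world_size % (tp * pp) != 0:
        raise ValueError(f"{world_size} GPUs cannot split into tp={tp} x pp={pp}")
    return world_size // (tp * pp)

# e.g. 1,024 GPUs with 8-way tensor and 8-way pipeline parallelism
# leave 16 data-parallel replicas.
```

The same arithmetic governs throughput trade-offs: raising tp or pp shrinks the number of data-parallel replicas, so choosing the split is part of the tuning work this role involves.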
Hyphen Connect offers a competitive compensation package, including benefits and opportunities for professional growth. Join us to be part of a dynamic team driving innovation in AI infrastructure.