What You'll Be Doing
- Develop and optimize large language model (LLM) inference frameworks.
- Optimize GPU and CUDA performance to create an industry-leading, high-performance LLM inference engine.
- Design and implement distributed inference infrastructure for LLMs.
- Build monitoring and management tools to ensure the reliability and scalability of online inference servers.
- Identify and resolve system inefficiencies and bottlenecks to improve overall system performance.
- Develop tools to analyze bottlenecks and sources of instability, then design and implement solutions.
- Collaborate with product teams to provide solutions that meet their requirements.
Qualifications
- Bachelor’s degree in Computer Science, Computer Engineering, or a relevant technical field, or equivalent practical experience. Experience in ML engineering optimization is preferred.
- Proficient in C/C++, Python, or Rust, with a strong understanding of algorithms and data structures.
- Expertise in GPU high-performance computing optimization using CUDA, with an in-depth understanding of computer architecture. Familiarity with parallel computing optimization, memory access optimization, and low-bit computing.
- Understanding of deep learning algorithms and neural network architectures.
- Familiarity with TensorRT-LLM, Orca, vLLM, and similar frameworks.
- At least 3 years of experience working on ML infrastructure (e.g., PyTorch, SageMaker) and a solid understanding of deep learning training frameworks such as PyTorch and TensorFlow.
Preferred Qualifications
- Knowledge of LLMs and experience accelerating and optimizing LLM models.
- Familiarity with Rust programming.
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits, as well as flexibility for remote work. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy