Essential AI’s mission is to deepen the partnership between humans and computers, unlocking collaborative capabilities that far exceed what could be achieved today. We believe that building delightful end-user experiences requires innovating across the stack - from the UX all the way down to models that achieve the best user value per FLOP.

We believe that a small, focused team of motivated individuals can create outsized breakthroughs. We are building a world-class, multi-disciplinary team that is excited to solve hard real-world AI problems. We are well-capitalized and supported by March Capital and Thrive Capital, with participation from AMD, Franklin Venture Partners, Google, KB Investment, and NVIDIA.

The Role

The Research Engineer, Pre-Training will be responsible for designing and implementing novel pre-training approaches to create powerful foundation models that can be fine-tuned/further aligned for a variety of downstream tasks. You will work very closely with our Research Scientists to identify key challenges and opportunities, and then develop and test new pre-training techniques and architectures. This may involve exploring different model architectures, training objectives, data sources, and scaling approaches. You will also be responsible for running large-scale experiments, analyzing results, and iterating on your approaches.

What you’ll be working on

  • You will lead or be a core contributor to our research bets that advance the real-world capabilities of our models.

  • You will collaborate closely with our data and product teams to close the loop between research and product, identify capability gaps, and evaluate progress.

  • Design novel pre-training architectures and algorithms to improve model performance and efficiency.

  • Design and execute experiments to evaluate the efficacy of pre-training techniques across various datasets; analyze experimental results to gain insights into model behavior and identify areas for improvement.

  • Develop efficient and scalable pre-training pipelines to train models on massive amounts of data.

  • Implement pre-training models and algorithms; optimize model performance and scalability for deployment in production environments.

What we are looking for

  • Self-motivated and proactive, iterating continuously by experimenting, drawing inferences, and deciding on the right next set of experiments.

  • Research experience with a focus on pre-training and building large language models using frameworks such as Megatron, DeepSpeed, MaxText, etc.

  • You have strong ML fundamentals and first principles thinking that guides your approach to research.

  • You have experience developing new methods or improving existing techniques in ML or related fields.

  • Proficiency in programming languages such as Python and frameworks such as JAX, PyTorch, or TensorFlow.

  • Experience with data engineering and preprocessing (in particular, optimizing data pipelines, feature engineering, and model evaluation) is beneficial.

  • Strong problem solving, analytical, communication, and collaboration skills with the ability to analyze complex datasets and derive actionable insights.

  • Ability to prototype and deploy pre-trained models in production environments.

  • You enjoy building things from the ground up in a fast-paced, collaborative environment.

We encourage you to apply for this position even if you don’t meet all of the above requirements but want to spend time pushing on these techniques.

We work in person in San Francisco and offer relocation assistance to new employees.

Location

San Francisco

Job Type
Full Time
