High-definition semantic maps are a critical tool for enabling autonomous driving. We are looking for highly motivated individuals who are passionate about machine learning, computer vision, and machine perception. You will join our algorithm team, a diverse group of researchers and engineers working together to build core machine perception algorithms that understand natural environments and build semantic maps in an automated way.

In this role, you will:

  • Design and develop novel algorithms and ML models for 3D machine perception in real-world environments.
  • Leverage our large-scale ML infrastructure to speed up development and deployment of ML models.
  • Stay up-to-date with the latest state-of-the-art research in computer vision and machine learning, and apply learned knowledge to improve our systems.
  • Develop metrics and tools to analyze errors and understand improvements of our systems.
  • Coordinate cross-functional initiatives and collaborate with engineers from Mapping, Perception, Data Science, and more.

Qualifications

  • BS, MS, or PhD degree in Computer Science, Mathematics, or a related field.
  • 8+ years of industry or academic experience in related fields.
  • Experience training and deploying deep learning models with frameworks such as PyTorch, TensorFlow, or JAX.
  • Experience with production Machine Learning pipelines: dataset creation, training frameworks, metrics pipelines.
  • Experience with 3D computer vision and projective geometry.
  • Proficiency in Python and/or C++.

Bonus Qualifications

  • Conference or journal publications in Computer Vision, Machine Learning or Robotics related venues (CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, ICRA, RSS, etc.).
  • Experience with object detection, object recognition, semantic segmentation and/or semantic scene understanding.
  • Experience with 3D reconstruction, SLAM, and/or Structure from Motion (SfM).
  • Experience with high-definition 3D maps.
  • Experience with geographic data and Geographic Information System (GIS).
Compensation

There are three major components to compensation for this position: salary, Amazon Restricted Stock Units (RSUs), and Zoox Stock Appreciation Rights. The salary will range from $196,000 to $278,000. A sign-on bonus may be part of a compensation package. Compensation will vary based on geographic location, job-related knowledge, skills, and experience.
Zoox also offers a comprehensive package of benefits including paid time off (e.g. sick leave, vacation, bereavement), unpaid time off, Zoox Stock Appreciation Rights, Amazon RSUs, health insurance, long-term care insurance, long-term and short-term disability insurance, and life insurance.
About Zoox

Zoox is developing the first ground-up, fully autonomous vehicle fleet and the supporting ecosystem required to bring this technology to market. Sitting at the intersection of robotics, machine learning, and design, Zoox aims to provide the next generation of mobility-as-a-service in urban environments. We’re looking for top talent that shares our passion and wants to be part of a fast-moving and highly execution-oriented team.
Accommodations

If you need an accommodation to participate in the application or interview process, please reach out to accommodations@zoox.com or your assigned recruiter.
A Final Note

You do not need to match every listed expectation to apply for this position. Here at Zoox, we know that diverse perspectives foster the innovation we need to be successful, and we are committed to building a team that encompasses a variety of backgrounds, experiences, and skills.

Salary

$196,000 - $278,000 per year

Location

Foster City, CA

Job Type
Full Time
