About us
Gather AI is a supply chain robotics company founded by PhDs from Carnegie Mellon’s Robotics Institute who created the world’s first provably safe autonomous helicopter. We have developed an Inventory as a Service platform in which fully autonomous drones collect warehouse inventory data at the press of a button.
This is an essential problem to solve: the warehouses we serve have typically misplaced over 10% of their inventory, worth $10+ million (seriously!). Their current manual techniques for taking inventory are buckling under the e-commerce boom brought on by COVID, made worse by the labor shortage and 70% annual staff turnover. Our drones take inventory 15x faster than humans with over 95% accuracy. We deliver this data through our web dashboard, which acts as a DVR for their warehouse and the place where they run their inventory operation. We are the leader in this new market, with proven technology: our drones are live in a dozen warehouses and have scanned over 150k pallet locations.
We are a pure-software robotics company and our key innovation is the world’s only autonomy and machine learning engine that can solve this problem with commodity hardware in GPS-denied environments. That means we avoid all of the hardware development pitfalls of traditional robotics companies and we can scale 10x faster. The robotics industry is starting to enter its “Google era,” and we are leading the charge.
About You
You are a detail-oriented, self-directed person who enjoys creating infrastructure-as-code. You are excited about the prospect of working across a broad array of DevOps concerns, including automating the deployment and scaling of ML pipelines for our AI and web dashboards, helping lead our teams’ containerization and automated deployment efforts, and building out advanced metrics-monitoring infrastructure. Maybe you’ve worked on big projects at a big company, or on many small consulting projects where standard infrastructure was needed, or even at a startup where you turned ideas into working software platforms. You are ready for a fresh challenge: to be the person who defines what DevOps and deployment look like at a fast-growing, AI- and robotics-centric company. You love test-driving new technologies, and you like the challenge of incorporating them into your organization in a secure, sustainable way.
What You’ll Do
- Identify and implement containerization, networking, and security best practices for our web and ML back-end applications.
- Help us scale up our ML pipeline packaging by improving how we distribute the inference workload to multiple nodes.
- Ensure the reliability and observability of our pipelines by introducing monitoring, metrics, and logging tools.
- Increase our development velocity by leveraging containerization, infrastructure-as-code, and modern CI/CD practices.
- Create tools, automation scripts, and processes to manage our ML models and datasets.
What You’ll Need
- BS in Computer Science/Engineering or equivalent technical experience.
- 10+ years of internet technology work experience, as a programmer or infrastructure-as-code developer.
- Experience deploying containerized services in production.
- Comfortable with cloud technologies, e.g., cloud VMs, databases, blob storage, serverless functions.
- Experience building and maintaining robust end-to-end pipelines for services.
- Experience implementing secure design principles and industry compliance standards (PCI-DSS, ISO, GDPR, etc.).
- Strong familiarity with the GitHub ecosystem and modern CI/CD practices.
- Knowledge of and comfort with cloud compute technologies, including network, data integrity (backup), and security considerations.
- Customer obsession! We are a customer-obsessed company. If you are not already customer-obsessed, expect to become so!
Nice to Have
- 2+ years of experience working with production infrastructure-as-code technologies (e.g., AWS CDK, Terraform, Pulumi).
- AI/ML pipeline management experience
- Deep knowledge of and experience with at least one major cloud platform (AWS, Azure, and/or Google Cloud); note that we are currently multi-provider (AWS and Azure).
- Experience in distributed ML inference with platforms such as AWS SageMaker, GCP Vertex AI, Seldon, or Kubeflow.
- Interest and experience in building complete code-to-production pipelines.
- Specific experience building/maintaining metrics and logging systems.
- Familiarity with flexible, cloud-based CI/CD tooling, such as GitHub Actions.
- Familiarity with clustering tools such as Kubernetes.
- ML expertise is not required, but familiarity with ML architectures and lifecycle, especially computer vision with deep learning, is a plus.
Compensation and Benefits
- Salary: 30-50 LPA
- Flexible schedule
- Unlimited paid leave
- Remote work & home office stipend
- Wellness benefits
If this sounds like a good fit, we’d love to meet you. Robotics is the future, and we’re leading the charge with our software-only business model. Come help us change the world!