Minimum qualifications:
- Bachelor's degree in Computer Science, Software Engineering, similar technical field, or equivalent practical experience
- 15 years of professional experience in software development
- Experience with large-scale ML infrastructure, ML or AI for products, or related fields
- Experience growing and scaling teams
Preferred qualifications:
- Experience building ML infrastructure or products with heavy use of AI/ML and large groups of stakeholders or users
- Experience with Responsible AI
- Understanding of ML systems and infrastructure for production, with the technical depth to be credible with customers and engineers
- Experience working with stakeholders to understand their needs and translate them into technical requirements
- Experience working across highly horizontal areas requiring a high degree of cross-organizational collaboration and coordination
About the job
We are looking for a Principal Engineer to join our AI Data Trust and Safety organization. In this role, you will be responsible for the technical design and vision for our ML Lineage & Governance and our Safety, Fairness, & Privacy infrastructure and tooling. You will be deeply involved in the long-term design and implementation of our Trust and Safety efforts, which ensure that Google’s models are developed and launched in a compliant, secure manner, have complete lineage, and provide safe and helpful responses to user queries.
The goal of the AI Data organization is to democratize high-quality ML data assets and infrastructure to enable Google to rapidly and iteratively deliver safe, innovative, and impactful product experiences powered by world-class AI models. The AI Trust and Safety organization’s mission is to accelerate trusted AI/ML.
We build scalable and automated infrastructure to manage ML assets at Google from development to launch with full traceability and auditability, without compromising developer velocity. Our RAI infrastructure quickly translates advances in research to production in a scalable manner. The GenAI Safety Platform builds: (1) a Responsible Data Flywheel to support discovery, acquisition, quality-driven curation, expansion, and storage of datasets for model training and evaluation in a responsible way, (2) a Critics Platform that serves safety classifiers/filters for prompts, responses, and training data (across modalities, languages, and abuse types), and (3) RAI evals for automated, scalable, low-latency safety and fairness evaluations of foundation and fine-tuned models.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
The US base salary range for this full-time position is $278,000-$399,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Lead the technical design across the team as needed to build comprehensive, automated, and robust ML lineage and governance infrastructure and tooling, as well as safety, fairness, & privacy infrastructure and tooling.
- Work with partners from Google DeepMind and product areas (Ads, Search, YouTube, Cloud, etc.) to drive lineage coverage in a seamless manner without compromising developer velocity. Influence partners and stakeholders within and across the organizations to build joint roadmaps, drive outcomes for AI governance, and translate evolving safety research to production use cases.
- Lead, design, and develop the ML lineage and governance strategy and safety, fairness, and privacy strategy in alignment with the AI Data strategy.
- Mentor and train other team members on system design and best practices relevant to the Generative AI space.