In Azure Artificial Intelligence (AI), we’re ensuring responsible development and deployment of the next generation of cutting-edge generative AI models like GPT-4. If you join our Responsible AI Applied Science team as a Senior Product Manager, you will be part of an organization committed to partnering with stakeholders across policy, research, applied science and engineering to help every product team at Microsoft ship these evolving technologies responsibly. 

As a Senior Product Manager in Responsible AI Applied Science, you will partner with applied scientists and researchers to develop, implement, and ship advanced multi-modal mitigation solutions for emerging Responsible AI risks, including new state-of-the-art AI models and other mitigation techniques such as prompt engineering. Additionally, you will be responsible for designing AI systems that can effectively measure Responsible AI risks at scale. 

This role requires a deep understanding of Generative AI use cases, Responsible AI, and/or the Content Moderation and Trust and Safety space; a keen product sense to identify the right problem and build the right solution; and a strong technical background to collaborate with scientists and engineers on AI development. 

This role is flexible in that you will be able to partner with your Manager to define the way that you’d like to work, whether that is in the office or from home. 

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. 

In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day. 

Responsibilities

As a Senior Product Manager for Responsible AI Applied Science, your primary focus will be to lead the development and implementation of advanced multi-modal measurement and mitigation solutions for emerging Responsible AI (RAI) risks and new content moderation policies. You will collaborate with applied scientists, multiple stakeholders, and cross-functional teams to build AI systems that effectively identify, measure and mitigate new RAI risks. Your key responsibilities will include: 

  • Identify and Measure RAI Risks: Lead the effort in defining emerging Responsible AI (RAI) risks and collaborate with stakeholders to develop AI systems that effectively identify and measure new risks across various modalities and harm categories
  • Develop Mitigation Solutions: Work closely with applied scientists to create and implement advanced multi-modal mitigation solutions for identified RAI risks and new content moderation policies, including AI models and prompt engineering
  • Technical Roadmap: Manage the technical roadmap, from ideation with researchers to seamless deployment in the large-scale RAI Safety System, ensuring the integration of new features into new Generative AI-powered products  
  • Cross-Functional Collaboration: Partner with various stakeholders, including Microsoft Research, policy, legal, engineering teams, and other product teams to proactively ensure the adoption of the RAI measurement and mitigation systems  
  • Continuous Improvement: Monitor the effectiveness of mitigation solutions and RAI harm measurement systems with a data-driven approach, gather insights, and iterate on the development process to continuously enhance the performance and efficacy of the solutions
  • Other: Embody our Culture and Values 

Qualifications

Required/Minimum Qualifications: 

  • Bachelor’s Degree and 7+ years of experience in product/service/project/program management or software development/applied science, OR equivalent experience 
  • 2+ years of experience managing technical cross-functional and/or cross-team projects  
  • 1+ years of experience in Machine Learning and/or AI  

Preferred/Additional Qualifications: 

  • Experience in Responsible AI, Digital Safety, or Security  
  • Experience with Generative AI
  • Technical background in Natural Language Processing and/or Computer Vision  
  • Bachelor’s degree or higher in Machine Learning, NLP, Computer Vision, or Computational Linguistics  

#IDCAIPlatformHiring


Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: 

  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter. 

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.

Location

Hyderabad, Telangana, India

Job Overview

Job Posted: 7 months ago
Job Type: Full Time
