Protect AI is shaping, defining, and innovating a new category within cybersecurity around the risk and security of AI/ML. Our ML Security Platform enables customers to see, know, and manage security risks, defend against unique AI security threats, and embrace MLSecOps for a safer AI-powered world. Its capabilities include AI supply chain security, an auditable bill of materials for AI, ML model scanning, signing, and attestation, and LLM security.
Join our team to help us solve this critical need of protecting AI!
Protect AI is seeking an Associate Security Researcher to join our dynamic team. This role is vital for enhancing our security posture and involves a mix of technical challenges and community engagement within the AI/ML security domain. This position is ideal for a web application penetration tester looking to deepen their expertise in AI security.
Contribute to our bug bounty program by triaging and validating reported vulnerabilities and coordinating with stakeholders to resolve confirmed issues.
Develop exploit modules and scanning templates to automate the detection and analysis of vulnerabilities across various platforms.
Engage in original research within the AI/ML technical space to advance the understanding and development of security measures in artificial intelligence.
Write insightful content pieces to share findings, educate the community, and position our program as a leader in AI security.
Serve as a primary point of contact for maintainers and researchers participating in our bug bounty program, managing communications and fostering a collaborative environment.
Automate and enhance security processes to improve efficiency and response times within our internal systems.
Stay updated with the latest cybersecurity threats and trends, incorporating new knowledge into our practices and outreach.
Proven experience in web application security, with a solid foundation in penetration testing.
Some Python coding experience is a plus, especially as it relates to security tooling and automation.
Bachelor's degree in Computer Science, Cybersecurity, or a related field, or equivalent practical experience.
At least 2 years of experience in vulnerability management, application security testing, or a similar role.
Knowledge of Linux environments and services. Some familiarity with containerization technologies such as Docker or Kubernetes is a plus.
Previous participation in bug bounty programs, demonstrating a keen understanding of vulnerability management and community engagement.
Excellent interpersonal and communication skills, capable of maintaining professionalism in discussions with outside researchers, maintainers, and organizations.
A passion for AI/ML technology security and a drive to continuously learn and adapt in a rapidly evolving field.
An exciting, collaborative work environment in a fast-growing startup.
Competitive salary and benefits package.
Excellent medical, dental, and vision insurance.
Opportunities for professional growth and development including attending and presenting technical talks at meetups and conferences.
A culture that values innovation, accountability, and teamwork.
Opportunities to contribute to our open source projects with thousands of Github stars.
Work with a team of talented and accomplished peers from AWS, Microsoft, and Oracle Cloud.
Work with best-in-class tools: an M2 MacBook Pro, a 34-inch monitor, a modern tech stack, and high-quality collaboration tools.
No bureaucracy or legacy systems. You are empowered to innovate and do your best work.
Incredible downtown Seattle office with 180-degree views of the Puget Sound and high-quality video conference systems.
Weekly lunch at the office and weekly delivery credits for food delivery services.
Complimentary gym access, secure on-premise bike parking, and an ORCA pass.
Protect AI is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Seattle HQ