RDQ224R252

While candidates in the listed location(s) are encouraged to apply for this role, candidates in other locations will also be considered.

The Responsible AI Team is committed to the development and implementation of AI systems that prioritize fairness, transparency, and accountability. Through rigorous testing, analysis, and research, we aim to identify potential vulnerabilities, biases, and risks associated with AI models, thereby fostering the creation of trustworthy and robust AI solutions. By advocating for responsible AI practices, we strive to contribute to a future where AI technologies benefit society while upholding the highest standards of integrity, privacy, and social responsibility.

The impact you will have:

You will be an important member of the Responsible AI Team at Databricks. The role involves performing security design reviews and red team engagements on new and existing models and AI systems, as well as conducting novel research in the AI security field.

  • Actively follow newly published research in the field and maintain a strong interest in the future of AI security.
  • Conduct Red Team operations on live AI systems in development and production environments, employing adversarial strategies and methods to discover vulnerabilities.
  • Investigate new and emerging threats to ML systems and address them both internally and externally.
  • Create and refine a set of tools, techniques, dashboards and automated processes that can be used to effectively discover and report vulnerabilities in AI systems.
  • Guide model and system development securely through the SDLC.
  • Pioneer best practices and guidance for various facets of ML technology.
  • Collaborate with internal teams to help facilitate advances in our operational security and monitoring procedures.

What we look for:

The ideal candidate will have a strong background in the following areas:

  • Machine Learning and Deep Learning concepts with coding experience using libraries like TensorFlow, PyTorch, or SparkNLP.
  • Expertise in programming languages such as Python or C++ for coding and secure code reviews.
  • Expertise with adversarial machine learning techniques.
  • Knowledge of cybersecurity principles and tools for vulnerability discovery and exploitation.
  • Strong problem-solving skills and genuine curiosity to develop novel attack methods against AI systems.
  • Excellent verbal and written communication skills.
  • Strong team player, as the role involves working closely with other security experts and AI researchers.
  • Typically 4+ years of experience, or an advanced degree (MS/PhD) with 3+ years of experience, in the ML domain.
  • BS or higher in Computer Science or a related field.

Benefits

  • Enhanced Parental Leaves
  • Fitness reimbursement
  • Annual career development fund
  • Home office & work headphones reimbursement
  • Employee referral bonus
  • Equity awards
  • Business travel accident insurance
  • Mental wellness resources
  • Health Allowance
  • Life, accident & disability insurance

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Location

Belgrade, Serbia

Job Type

Full Time
