AI Security Researcher

May 11

🏡 Remote – New York

Robust Intelligence

Eliminate machine learning failures with an end-to-end ML Integrity solution

11–50 employees

💰 $30M Series B (December 2021)

Description

• Track and analyze emerging threats to AI systems, focusing on AI/ML models, applications, and environments.
• Develop and implement detection and mitigation strategies for identified threats, including prototyping new approaches.
• Lead comprehensive red-teaming exercises and vulnerability assessments for generative AI technologies, identifying and addressing potential security vulnerabilities.
• Develop and maintain security tools and frameworks using Python or Golang.
• Curate and generate robust datasets for training ML models.
• Author blog posts, white papers, or research papers on emerging threats in AI security.
• Collaborate with cross-functional teams of researchers and engineers to translate research ideas into product features.

You'll also have the opportunity to contribute to our overall machine learning culture as an early member of the team.

Requirements

• 3+ years of proven experience
• Experience in applied red- and/or blue-team roles, such as threat intelligence, threat hunting, or red teaming
• Strong understanding of common application security vulnerabilities and their mitigations
• Strong programming skills in general-purpose languages such as Python or Golang
• Excellent written and verbal communication skills, along with strong analytical and problem-solving skills
• Ability to quickly learn new technologies and concepts and to understand a wide variety of technical challenges

Benefits

• 11 paid holidays
• Generous accrued time off, increasing with years of service
• Generous paid sick time
• Annual day of service
