Remote | AI Red-Teamer — Adversarial AI Testing (English) — Up to $111/hr
- Talent Job Seeker
- California
- Full-time
We are sharing a specialised part-time consulting opportunity for professionals with experience in AI red-teaming, cybersecurity, adversarial testing, or socio-technical risk analysis. This role supports a leading AI research lab by identifying vulnerabilities in advanced AI systems and helping generate high-quality human data used to improve AI safety. Experts in this role will test AI systems using adversarial techniques, identify potential weaknesses, and document vulnerabilities through structured testing methodologies.

Key Responsibilities
- Red-team AI models and agents using adversarial testing methods
- Identify vulnerabilities such as jailbreaks, prompt injections, and misuse scenarios
- Generate high-quality human data by annotating failures and classifying system vulnerabilities
- Follow testing frameworks, taxonomies, and benchmarking guidelines to ensure consistent evaluation
- Produce reproducible reports, datasets, and attack scenarios
- Document systemic risks and security gaps in AI model behavior
- Support multiple projects involving LLM safety, misuse prevention, and socio-technical risk analysis

Ideal Profile
Strong candidates may have:
- Experience in AI red-teaming, adversarial AI testing, or cybersecurity
- Familiarity with LLM jailbreaks, prompt injection attacks, or adversarial prompting
- Experience in penetration testing, exploit analysis, or reverse engineering
- Strong analytical thinking and vulnerability identification skills
- Experience documenting findings and communicating risks to technical and non-technical stakeholders
- Ability to follow structured testing frameworks and evaluation methodologies
- Strong written communication skills and attention to detail

Location eligibility
- United States
- United Kingdom
- Canada

Nice-to-Have Expertise
- Adversarial machine learning (RLHF/DPO attacks, model extraction, jailbreak datasets)
- Cybersecurity and penetration testing
- Socio-technical risk research (bias, misinformation, abuse analysis)
- Creative adversarial thinking through psychology, writing, or behavioral analysis

Why This Opportunity
- Work at the frontier of AI safety and adversarial testing
- Help identify vulnerabilities before AI systems reach production
- Contribute directly to making AI models more robust and trustworthy
- Collaborate with leading AI labs on high-impact safety research
- Flexible remote work with competitive compensation

Contract Details
- Independent contractor role
- Fully remote with flexible scheduling
- Projects may vary in scope depending on testing requirements
- Compensation ranges from $50–$111/hr depending on expertise and project scope
- Weekly payments via Stripe or Wise

About the Platform
This opportunity is available through a leading AI-driven work platform.
Place of work
Talent Job Seeker, California, United States
About us
Identify the best talent with Talent Job Seeker.
Job ID: 10471280 / Ref: 0b3d63050d38bb1f44587e858ad4da90