
AI Security Researcher
Agoda
about 2 months ago
Bangkok, Thailand
Mid Level / Senior
Responsibilities
- Design, execute, and document offensive security techniques against AI systems.
- Assess and attempt to compromise Model Context Protocol (MCP)–based systems.
- Build and automate security testing workflows for multiple LLM models and APIs.
- Perform offensive security testing and red teaming of AI-driven products.
- Research and analyze security weaknesses in LLMs and Generative AI systems.
- Contribute to the design and testing of safety and security guardrails for AI.
- Propose and evaluate defensive controls to secure AI systems.
- Translate research findings into practical engineering requirements.
- Stay current with emerging AI security standards and threat models.
- Produce clear technical documentation and share knowledge on AI security best practices.
Requirements
- Bachelor's degree in Computer Science or a related field.
- 2-5 years of experience in offensive cybersecurity.
- Good communication skills in English to convey security risks.
- Deep understanding of LLMs and Generative AI architectures.
- Hands-on experience with jailbreaking and red-teaming AI agents.
- Strong background in offensive security, including API security testing.
- Prior experience in red teaming, penetration testing, or adversarial testing.
- Bug bounty or HackerOne experience is a strong plus.
- Scripting knowledge in Python or PowerShell for automation.
Tech Stack
PowerShellPython
Categories
AI & MLSecurity