Technical Specialist, Application Security & AI Governance

Lucid Motors

Responsibilities

  • Serve as the cybersecurity point of contact for application teams, ensuring security is integrated across the software development lifecycle (SDLC).
  • Partner with engineers, product managers, and ML practitioners to embed security controls into software and AI workflows.
  • Champion secure-by-design principles for applications that process sensitive data or leverage AI models.
  • Conduct threat modeling, secure design reviews, and security assessments for applications, APIs, and AI-enabled features.
  • Provide security architecture guidance for cloud-native applications, containers, microservices, and AI/ML components.
  • Advise on securing LLM-based and data-driven applications, addressing risks such as prompt injection, model data leakage, API abuse, and insecure endpoints.
  • Conduct in-depth security reviews for critical applications and AI use cases, ensuring alignment with business risk tolerance and regulatory expectations.
  • Collaborate with engineering and architecture leaders to ensure application and AI-related risks are identified early and remediated efficiently.
  • Partner with GRC and Legal to assess compliance risks related to software development, data handling, and AI governance.
  • Track, prioritize, and drive the remediation of security findings, helping teams improve measurable outcomes and reduce vulnerabilities.
  • Develop and maintain application security standards, secure coding practices, and design patterns.
  • Shape policies and controls related to secure development, data usage in applications, and responsible AI integration.
  • Help teams adopt DevSecOps practices and integrate tools such as SAST, DAST, SCA, secrets scanning, and container security.

Requirements

  • 8+ years of experience in cybersecurity, with a strong emphasis on application security and secure development practices.
  • In-depth knowledge of web application security, API security, DevSecOps, and secure cloud architectures.
  • Experience with secure coding, threat modeling, vulnerability management, and security reviews across the SDLC.
  • Familiarity with securing AI/ML-enabled applications, including model input validation, prompt security, and API protection.
  • Strong communication and stakeholder engagement skills; able to influence without authority.

Nice-to-haves

  • Certifications such as CSSLP, CISSP, OSWE, GWAPT, or relevant cloud/AI security credentials.
  • Experience working with LLMs and generative AI models in production environments.
  • Knowledge of data security tools and how they integrate into application security strategies (e.g., DSPM, DLP, insider risk platforms).
  • Familiarity with regulatory and industry frameworks (e.g., NIST AI RMF, Secure by Design, ISO/IEC 42001, EU AI Act).
