Position Expired
This job is no longer accepting applications.
AI Safety Researcher
Trace Machina
About Trace Machina
Trace Machina is revolutionizing the software development lifecycle with NativeLink, a high-performance build caching and remote execution system. NativeLink accelerates software compilation and testing while reducing infrastructure costs, allowing organizations to optimize their build workflows. We work with clients of all sizes to help them scale and streamline their build systems.
As part of our growth, we are looking for a talented and innovative AI Safety Researcher to join our team. In this role, you will research and help ensure the safety, robustness, and ethical integrity of AI-driven systems, with a focus on improving the reliability of automated build and testing processes. You will be at the forefront of making sure our systems are secure, fair, and capable of performing in complex environments.
Job Description
As an AI Safety Researcher at Trace Machina, you will help develop the AI-powered tools and systems behind NativeLink’s build caching and remote execution platform. You will focus on designing safe, reliable, and interpretable machine learning models for optimizing build processes while mitigating risks related to automation and AI in the development lifecycle. You will collaborate closely with engineers and product teams to ensure that safety is prioritized throughout the development and deployment of AI-based solutions.
Job Responsibilities
- Conduct research into AI safety, focusing on robustness, fairness, and interpretability of machine learning models used in build systems
- Develop algorithms and frameworks that ensure the safe deployment of AI-powered automation in software build, testing, and CI/CD workflows
- Work closely with engineering teams to integrate AI safety mechanisms and ensure robust error handling and fault tolerance
- Investigate and mitigate risks associated with AI-driven decision-making in distributed build systems, especially in mission-critical operations
- Contribute to the development of safety-critical AI models for optimizing performance, caching accuracy, and task coordination across various customer environments
- Conduct studies on the ethical implications of AI in software development, ensuring that algorithms used in NativeLink align with responsible AI principles
- Perform in-depth testing, model validation, and risk assessment to ensure AI systems meet reliability and safety standards
- Collaborate with product managers and engineers to translate research findings into practical tools and features for our customers
Required Skills and Experience
- 3+ years of experience in AI/ML research, with a focus on safety, robustness, and interpretability
- Strong background in machine learning theory, with practical experience implementing models and algorithms
- Expertise in AI safety frameworks, fault tolerance, and risk mitigation strategies for AI systems
- Experience with reinforcement learning, adversarial training, and robustness testing of AI models
- Proficiency in programming languages such as Python, C++, or Go, with hands-on experience in AI development libraries (e.g., TensorFlow, PyTorch)
- Strong understanding of AI ethics, fairness, and the impact of machine learning algorithms in real-world applications
- Ability to identify potential safety risks in AI-driven systems and design solutions to address them
- Familiarity with distributed systems, cloud infrastructure, and build/test automation frameworks
- Excellent problem-solving skills, with the ability to work independently and collaboratively in a fast-paced environment
Nice to Have
- Experience with AI safety standards and best practices for building reliable AI models
- Familiarity with the challenges of AI integration into large-scale software systems and CI/CD pipelines
- Knowledge of adversarial machine learning techniques and safe exploration methods
- Publications in AI safety, robustness, or ethics-related fields
Why Join Trace Machina?
- Work at the cutting edge of AI-powered build optimization and testing tools
- Contribute to the safety and reliability of AI-driven systems used by industry-leading customers
- Collaborate with a dynamic, innovative team dedicated to solving complex problems
- Opportunity to shape the future of AI safety in software development
- Competitive salary and benefits package
- Opportunities for personal and professional development
If you’re passionate about AI safety and want to help shape the future of AI-powered software development systems, we’d love to hear from you!