AI Engineer - Robotics Foundation Model
Alldus
About the Company
- The next era of national security will be shaped by teams that can deploy intelligent machines at scale. We are building a large-scale multimodal model designed to serve as the cognitive backbone for fleets of autonomous robotic systems operating in complex, real-world environments.
- Our platform enables operators to coordinate groups of robots using natural language while those systems independently perceive, reason, and execute tasks in dynamic conditions. This is not incremental autonomy — it is a step-change in how intelligent machines are trained, coordinated, and deployed in mission-critical settings.
- We are an early-stage, venture-backed company operating with urgency and high ownership. This is an opportunity to help define the technical foundation of next-generation autonomous systems from the ground up.
About The Role
- We are seeking an AI Engineer with a strong interest in embodied intelligence and multimodal learning systems.
- You will work across perception, planning, and control, developing and deploying advanced vision-language-action (VLA) models that power robotic platforms in adversarial and unstructured environments. This role bridges research and production — translating cutting-edge ML advances into real-time, field-ready autonomy.
- You should expect rapid iteration, aggressive experimentation, and a high degree of ownership. As an early team member, you will influence architecture, technical direction, and engineering standards.
What You'll Do
- Design, train, and evaluate advanced multimodal models for autonomous robotic systems
- Build scalable systems for multimodal fusion, continual learning, and domain transfer
- Develop memory, coordination, and tool-use capabilities for AI agents
- Convert research prototypes into real-time perception and decision systems suitable for deployment
- Integrate ML pipelines with robotic hardware and onboard autonomy stacks
- Maintain training, simulation, and inference infrastructure
- Run rigorous experiments to measure robustness and performance in unstructured environments
- Stay current on research across vision-language models, reinforcement learning, computer vision, and agent architectures
- Participate in field validation and deployment testing under real operational constraints
Must-Have Qualifications
- Must be a US Citizen
- 2+ years of hands-on experience building and deploying machine learning systems
- Experience in robotics, autonomous systems, agentic AI, or other real-time ML applications preferred
- Strong foundation in computer vision, deep learning, multimodal transformers, or agent architectures
- Proficiency in Python and modern ML frameworks such as PyTorch or TensorFlow
- Bonus: experience with JAX, CUDA, distributed training, or low-latency inference optimization
- Working knowledge of fine-tuning, reinforcement learning, and large-scale model training techniques
- Bachelor’s degree in Computer Science, AI, Robotics, or related field; advanced degrees valued
- Proven ability to move from research concepts to production systems in fast-paced environments
We are also hiring researchers to work on agents, multimodal reasoning, and RL, as well as AI infrastructure engineers.
Job Type
- Full Time
Location
- Sunnyvale, CA