Research Scientist - Interpretability

Stanford University

Position: Research Scientist - Interpretability (1 Year Fixed Term)

Location: Stanford

Summary

The Research Scientist - Interpretability leads initiatives to develop novel methods for analyzing and interpreting foundation models of the brain, bridging artificial intelligence with neuroscience. The role involves designing theoretical frameworks, conducting interpretability studies, and mentoring junior researchers within a multidisciplinary research environment at Stanford University. This position focuses on advancing understanding of neural representations and computational principles underlying natural intelligence using AI and neurotechnology.

The Enigma Project (enigmaproject.ai) is a research organization based in the Department of Ophthalmology at Stanford University School of Medicine, dedicated to understanding the computational principles of natural intelligence using the tools of artificial intelligence. Leveraging recent advances in neurotechnology and machine learning, this project aims to create a foundation model of the brain, capturing the relationship between perception, cognition, behavior, and the activity dynamics of the brain.

This ambitious initiative promises to offer unprecedented insights into the algorithms of the brain while serving as a key resource for aligning artificial intelligence models with human-like neural representations.

As part of this project, we seek talented individuals specializing in mechanistic interpretability to develop novel methods and scalable systems for analyzing and interpreting these models, helping us understand how the brain represents and processes information. The role combines rigorous engineering practices with cutting-edge research in model interpretability, working at the intersection of neuroscience and artificial intelligence.

Role & Responsibilities

  • Lead research initiatives in the mechanistic interpretability of foundation models of the brain
  • Develop novel theoretical frameworks and methods for understanding neural representations
  • Design and guide interpretability studies that bridge artificial and biological neural networks
  • Develop and apply advanced techniques for circuit discovery, feature visualization, and geometric analysis of high-dimensional neural data
  • Collaborate with neuroscientists to connect interpretability findings with biological principles
  • Mentor junior researchers and engineers in interpretability methods
  • Help shape the research agenda of the interpretability team
  • * Other duties may also be assigned

What we offer

  • An environment in which to pursue fundamental research questions in AI and neuroscience interpretability
  • Access to unique datasets spanning artificial and biological neural networks
  • State-of-the-art computing infrastructure
  • Competitive salary and benefits package
  • Collaborative environment at the intersection of multiple disciplines
  • Location at Stanford University with access to its world-class research community

Application

In addition to applying for the position, please send your CV and a one-page interest statement to:

* The job duties listed are typical examples of work performed by positions in this job classification and are not designed to contain or be interpreted as a comprehensive inventory of all duties, tasks, and responsibilities. Specific duties and responsibilities may vary depending on department or program needs without changing the general nature and scope of the job or level of responsibility.

Employees may also perform other duties as assigned.

Desired Qualifications

  • Ph.D. in Computer Science, Machine Learning, Computational Neuroscience, or related field plus 2+ years post-Ph.D. research experience
  • 2+ years of practical experience in training, fine-tuning, and using multi-modal deep learning models
  • Strong publication record in top-tier machine learning conferences and journals, particularly in areas related to multi-modal modeling
  • Strong programming skills in Python and deep learning frameworks
  • Demonstrated ability to lead research projects and mentor others
  • Ability to work effectively in a collaborative, multidisciplinary environment

Preferred Qualifications

  • Background in theoretical neuroscience or computational neuroscience
  • Experience in processing and analyzing large-scale, high-dimensional data from diverse sources
  • Experience with cloud computing platforms (e.g., AWS, GCP, Azure) and their machine learning services
  • Familiarity with big data and MLOps platforms (e.g., MLflow, Weights & Biases)
  • Familiarity with training, fine-tuning, and quantization of LLMs or multi-modal models using common techniques and frameworks (LoRA, PEFT, AWQ, GPTQ, or similar)
  • Experience with large-scale distributed model training frameworks (e.g., Ray, DeepSpeed, Hugging Face Accelerate, FSDP)

EDUCATION & EXPERIENCE (REQUIRED)

Bachelor's degree and five years of relevant experience, or combination of education and relevant experience.

KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED)

  • Expert knowledge of the principles of engineering and related natural sciences.
  • Demonstrated…
