Research Scientist - Mechanistic Interpretability


Our goal is to establish a foundation model of the brain by leveraging recent advances in neurotechnology and machine learning.

Job Requirements

  • Design scalable analysis pipelines for mechanistic interpretability studies of large neural networks.
  • Develop automated feature-visualization techniques for understanding neural representations.
  • Build tools for circuit discovery and geometric analysis of population activity.
  • Create reproducible analysis workflows that handle large-scale neural data.
  • Collaborate with researchers to design and implement novel interpretability methods.
  • Maintain distributed computing infrastructure for running interpretability analyses at scale.
  • Document findings through technical reports and visualization tools.

