AI Ethics and Safety Researcher

WhatJobs Direct

Our client is at the forefront of AI innovation, seeking a visionary AI Ethics and Safety Researcher to join their pioneering team. This is a fully remote position focused on ensuring the responsible development and deployment of artificial intelligence technologies. The ideal candidate will possess a deep understanding of AI principles, machine learning, and the ethical considerations surrounding advanced AI systems. You will be instrumental in researching, defining, and implementing frameworks, policies, and technical solutions to mitigate AI risks, promote fairness, transparency, and accountability. This role demands a unique blend of technical expertise, philosophical inquiry, and a commitment to societal well-being.

Key Responsibilities

- Conduct cutting-edge research into ethical AI principles, safety challenges, and the societal impacts of AI systems.
- Develop and propose novel technical approaches and methodologies for AI alignment, safety, and fairness.
- Design and evaluate metrics and testing procedures for AI systems to ensure ethical compliance and risk mitigation.
- Collaborate with AI researchers, engineers, and product managers to integrate ethical considerations throughout the AI development lifecycle.
- Analyze potential biases in datasets and algorithms and develop strategies to address them.
- Contribute to the formulation of internal AI ethics guidelines, policies, and best practices.
- Stay abreast of the latest research, regulations, and industry trends in AI ethics and safety globally.
- Publish research findings in top-tier academic conferences and journals.
- Engage with external stakeholders, including policymakers, academics, and industry leaders, to advance the field of AI ethics.
- Develop educational materials and conduct training sessions on AI ethics and safety for internal teams.
- Advise on the responsible deployment of AI technologies across various applications.
- Identify and assess emerging risks associated with advanced AI capabilities.
- Work towards establishing robust safety protocols for AI systems.

Qualifications

- Ph.D. or equivalent research experience in Computer Science, Artificial Intelligence, Machine Learning, Philosophy, or a related field with a focus on AI ethics and safety.
- Demonstrated expertise in AI/ML techniques, including deep learning, reinforcement learning, or natural language processing.
- Strong understanding of ethical frameworks, the philosophical underpinnings of AI, and the societal implications of technology.
- Proven ability to conduct independent research and publish high-quality work.
- Excellent analytical, critical thinking, and problem-solving skills.
- Exceptional written and verbal communication skills, with the ability to articulate complex technical and ethical concepts clearly.
- Experience with programming languages such as Python is highly desirable.
- Familiarity with AI safety research areas such as interpretability, robustness, and value alignment.
- Ability to work effectively in a collaborative, remote research environment.
- Passion for ensuring AI is developed and used for the benefit of humanity.

This is a groundbreaking opportunity to shape the future of responsible AI development from San Jose, California, US.

Job Type
Full Time
Location
San Jose, CA
