FAR.AI is seeking Research Engineers to execute AI safety research projects and red-teaming. Your focus will be on implementing machine learning algorithms, running experiments, and analyzing results.
About Us
FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.
Since our founding in July 2022, we've grown quickly to 30+ staff, produced over 40 influential academic papers, and established leading AI safety events. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and coverage in the Financial Times, Nature News, and MIT Technology Review.
We drive practical change through red-teaming with frontier model developers and government institutes. Most recently, we discovered major issues with Anthropic’s latest model the same day it was released, and worked with OpenAI to safeguard their latest model. Additionally, we help steer and grow the AI safety field through developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members; and supporting the community through targeted grants to technical researchers.
About FAR.Research
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects. Our model is to conduct initial investigations into a range of high-potential areas. We incubate the most promising directions through a combination of in-house research, field-building events, and targeted grants. Once the core research problems are solved, we scale the solutions into a minimum viable prototype, demonstrating their validity to AI companies and governments to drive adoption.
Our current focus areas include:
- Mitigating AI deception: studying when lie detectors induce honesty or evasion, and developing model organisms for deception and sandbagging
- Evaluating and red-teaming: conducting pre- and post-release adversarial evaluations of frontier models (e.g. Claude 4 Opus, ChatGPT Agent, GPT-5); developing novel attacks to support this work; and exploring new threat models (e.g. persuasion, tampering risks).
- Robustness: building a science of security and robustness for AI to rigorously solve these security problems, from demonstrating that superhuman systems can be vulnerable through to scaling laws for robustness.
- Explainability: developing foundational techniques such as codebook features and AC/DC, and applying them to understand core safety problems like learned planning.
FAR.AI is one of the largest independent AI safety research institutes, and is rapidly growing with the goal of diversifying and deepening our research portfolio. For that reason, we’re seeking senior research engineers who can increase the technical depth of our work and allow us to answer research questions more definitively and at a larger scale.
About the Role
You will work in one of FAR.AI's research workstreams, developing scalable implementations of machine learning algorithms and using them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally have:
- Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.
- Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
- Collaboration. You will work regularly with our collaborators from different academic labs and research institutions.
- Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
- Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
About You
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering and develop their research skills. Interested applicants may be transitioning from a software engineering background, or looking to grow an existing portfolio of machine learning research.
It is essential that you:
- Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
- Have experience with at least one object-oriented programming language (preferably Python).
- Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
- Common ML frameworks like PyTorch or TensorFlow.
- Natural language processing or reinforcement learning.
- Operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, calculus, probability, and statistics.
Logistics
If based in the USA, you will be an employee of FAR.AI, a 501(c)(3) research non-profit. Outside the USA, you will be an employee of an employer-of-record (EoR) organization on behalf of FAR.AI.
- Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.
- Hours: Full-time (40 hours/week).
- Compensation: $100,000-$190,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.
- Application process: A 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial we may be able to find alternative ways of testing your fit.
If you have any questions about the role, please do get in touch at talent@far.ai.