Research Engineer, RSP Evaluations
Research & Engineering
Hybrid: San Francisco, CA
Salary range $280,000 - $520,000
added Sat Oct 14, 2023
We are looking for Research Engineers to build “gold standard” evaluations for catastrophic risks, in order to determine which AI Safety Level (ASL) to assign to models. This will have major implications for the way we train, deploy, and secure our models, as detailed in our Responsible Scaling Policy (RSP).
The policy defines a series of capability thresholds, called AI Safety Levels (ASLs), that represent increasing levels of risk. Crossing an ASL threshold would trigger a commitment to more stringent safety, security, and operational measures intended to handle that increased risk.
Research Engineers will join a team focused on National Security threats: evaluating whether models increase the likelihood or consequences of Chemical, Biological, Radiological, and Nuclear (CBRN) risks, and whether models make sophisticated offensive cyber capabilities available to unsophisticated actors.
We are also looking for expressions of interest in the team focused on Autonomous Replication and Adaptation (ARA) threats, as well as a potentially expanded set of evaluation workstreams in the future.

Responsibilities:

  • Research Engineers will be responsible for designing and running the evaluations needed to measure dangerous capabilities in models, and for determining when we cross an ASL threshold.
  • You’ll work with world-class experts in fields like biosecurity, autonomous replication, cybersecurity, and national security, and experiment with new evals, in order to measure how risky AI systems are.
  • Done well, this work will inform decisions at the highest levels of the company.

You may be a good fit if you:

  • Have an ML-focused background, with both engineering and research skills (e.g. experience in Python)
  • Are driven to find solutions to ambiguously scoped problems
  • Design and run experiments and iterate quickly to solve machine learning problems
  • Thrive in a collaborative environment (we love pair programming!)
  • Have experience training, working with, and prompting models
  • For all workstreams, experience designing and building evaluations would be valuable, but is not essential.
  • For the National Security threats workstreams, we will particularly value experience working on confidential or sensitive projects, along with demonstrated integrity, responsibility, and trustworthiness. Domain-specific knowledge will also be valued, although it is not necessary.
  • For the ARA threats workstream, we would value experience with language model agents, although this is not essential.

Sample Projects:

  • CBRN risks – working with external experts in the field of biosecurity to design clear and repeatable CBRN evaluations, based on a summary of dangerous biological capabilities. Using our post-training infrastructure to prepare new generations of models for routine evaluations.
  • Cyber risks – working with external cyber experts to co-design a set of clear and repeatable cyber evaluations. This is likely to involve building custom environments or additions onto existing tooling and infrastructure, or locating specialized datasets.
  • ARA risks – building infrastructure and tooling for testing for these capabilities, and iterating with external ARA experts to scope possible tasks. This will involve building custom “testing environments” and new infrastructure.

Annual Salary (USD)

  • The expected salary range for this position is $280k - $520k
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.
We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.
Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
US Benefits - The following benefits are for our US-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.
UK Benefits - The following benefits are for our UK-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Private health, dental, and vision insurance for you and your dependents.
- Pension contribution (matching 4% of your salary).
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
- Health cash plan.
- Life insurance and income protection.
- Daily lunches and snacks in our office.
* This compensation and benefits information is based on Anthropic’s good faith estimate for this position, in San Francisco, CA, as of the date of publication and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance on our interviews or in a work trial.
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and for society as a whole.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact: advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.