Privacy Program Lead
Legal
hybrid: San Francisco, CA
added Sat Oct 14, 2023
Apply to Anthropic
We are looking for an experienced Privacy Technical Program Manager to develop, implement, scale, and manage privacy-related policies and procedures to ensure that our AI products and services comply with privacy regulations and protect user data. Reporting to the Legal team, you will work cross-functionally with Legal, Operations, Security, Product, and Engineering to identify and address privacy requirements and risks; define technical solutions, policies, and procedures; support internal education on privacy issues; and foster a culture of privacy and ethics across the organization.
The ideal candidate will have experience managing privacy programs for businesses operating in complex, rapidly changing, ambiguous areas of law and policy, balancing competing concerns, and translating broad principles into concrete actions and advice at the intersection of AI and data privacy. If you have a passion for privacy and the expertise to architect a world-class privacy program, we want to hear from you!
About Anthropic
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and for society as a whole. Our interdisciplinary team has experience across ML, physics, policy, business and product.

Responsibilities:

  • Collaborate with Legal, Operations, Product, Security, and Engineering to develop, implement, and monitor comprehensive privacy policies, procedures, and controls aligned with relevant regulations such as GDPR and CCPA.
  • Work with Privacy Engineering to develop and implement technical privacy requirements, controls, processes, tools, and infrastructure such as consent mechanisms, preference managers, and user controls.
  • Collaborate with cross-functional stakeholders to coordinate and support privacy design reviews, integration of privacy by design principles, and privacy threat modeling, risk assessments, and mitigations.
  • Monitor systems and data practices to identify privacy issues and ensure compliance.
  • Identify ways to automate privacy processes like data subject requests, reporting, and audits where possible and work with Privacy Engineering to implement automated practices.
  • Collaborate with product managers and engineering teams to balance privacy and product objectives.
  • Clearly document privacy policies, procedures, architectures, and controls.
  • Perform vendor due diligence and onboarding for privacy compliance.
  • Report on privacy metrics, issues, and progress to leadership.
  • Stay up-to-date on privacy best practices, tools, and emerging technologies.
  • Continuously improve privacy practices through training, research, and sharing lessons learned.

You may be a good fit if:

  • You have a passion for developing AI that is both innovative and trustworthy.
  • You understand the opportunities of AI to benefit society as well as the risks and limitations that must be addressed.
  • You have experience operating in a fast-paced technology startup in which priorities shift rapidly and schedules “move to the left.”
  • You thrive in this dynamic environment and pride yourself on your adaptability and ability to pivot with speed and grace.
  • You understand how to achieve the right balance between the organization’s mission and goals, when to be flexible and when to draw a hard line.
  • You have a knack for identifying and implementing efficient processes and policies.
  • You thrive as a member of cross-functional teams building frontier technologies and want to develop a deep understanding of our technical teams and what we are building.
  • You enjoy wearing many hats in a fast-growing startup environment and are comfortable operating outside your areas of expertise and in uncharted legal territory.
  • You are a “doer” and are willing to roll up your sleeves to get things done.
  • You’re a team player who doesn’t hesitate to jump in to solve difficult problems.

Strong candidates may also:

  • Have 5+ years of experience in a privacy officer, privacy program management or related role.
  • Have deep knowledge of privacy-enhancing technologies and privacy engineering concepts.
  • Have hands-on experience building privacy features and controls.
  • Understand relevant privacy laws and regulations.
  • Have experience with data mapping, DPIAs, and managing privacy programs.
  • Have excellent communication, collaboration, and stakeholder management skills.
  • Have the ability to translate complex privacy concepts for broad audiences.
  • Have a passion for advancing privacy and ethical data practices.
  • Have familiarity with machine learning, generative models, and products (strongly preferred).
  • Be a motivated self-starter able to multi-task and juggle multiple priorities in a dynamic environment.
  • Have a growth mindset, with a passion for AI's potential to positively impact the world and realistic assessment of its risks and limitations.
  • Have a commitment to building trustworthy, privacy-forward, ethical AI systems.
Logistics
Location-based hybrid policy: Currently, we expect this position to be in office at least 75% of the time.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
US visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.
Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
Benefits - Benefits we offer include:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take 4-6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.
* This compensation and benefits information is based on Anthropic’s good faith estimate for this position, in San Francisco, CA, as of the date of publication and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance on our interviews or in a work trial.
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and for society as a whole.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.