OpenAI

Trust & Safety Operations Analyst, Platform Abuse

What OpenAI is looking for in applicants

About the Team

Trust and Safety is at the foundation of OpenAI’s mission. The goal of the Applied AI team is to turn OpenAI’s technology into useful products. We see this as a path toward safe and broadly beneficial AGI: gaining practical experience in deploying these technologies safely and making them easy to build with.

Within the Applied AI team, the Trust and Safety team protects OpenAI’s technologies from abuse. We build tools and processes to detect, understand, and mitigate misuse at scale. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.

In summer 2020, we introduced GPT-3 as the first product on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. MIT Technology Review listed GPT-3 as one of its 10 Breakthrough Technologies of the past year (alongside mRNA vaccines!). In the summer of 2021, we launched Copilot, powered by our Codex models, in partnership with GitHub.

About the Role

As an analyst on the Trust and Safety team, you will be responsible for discovering and mitigating abuse of OpenAI’s technologies. The Platform Abuse subteam specializes in detecting new threat vectors, including new categories of harmful use cases and scaled abuse. The position consists of equal parts analysis work and capability improvement. This is an operations role based in our San Francisco office and will require participating in an on-call rotation and resolving urgent incidents outside of normal work hours.

In this role, you will:

  • Detect, respond to, and escalate platform abuse incidents
  • Improve our detection and response processes
  • Collaborate with engineering, policy, and research teams to improve our tooling and understanding of abusive content 

You might thrive in this role if you:

  • Have a pragmatic approach to being on an operations and incident response team and can get in the weeds to get stuff done
  • Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams
  • Have experience in a technical analysis role or with log analysis tools such as Splunk or Humio
  • Bonus if you have experience with large language models and/or can use scripting languages (Python preferred) to write programs to solve problems (a minimal illustrative sketch of this kind of scripting follows this list)
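
As a purely illustrative example of the scripting mentioned in the last bullet, the sketch below reads a hypothetical CSV of API request logs (columns "timestamp" and "account_id") and flags accounts whose hourly request volume exceeds a threshold. The file name, column names, and threshold are assumptions for illustration only, not a description of OpenAI’s actual logs or tooling.

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical cutoff for what might count as suspiciously high volume.
REQUESTS_PER_HOUR_THRESHOLD = 1000


def flag_high_volume_accounts(log_path):
    """Count requests per (account, hour) bucket and return buckets over the threshold."""
    buckets = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            # Bucket each request into its calendar hour, e.g. "2021-07-01 12:00".
            hour = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m-%d %H:00")
            buckets[(row["account_id"], hour)] += 1
    return {key: n for key, n in buckets.items() if n > REQUESTS_PER_HOUR_THRESHOLD}


if __name__ == "__main__":
    # "api_requests.csv" is a placeholder path for an exported request log.
    for (account, hour), n in sorted(flag_high_volume_accounts("api_requests.csv").items()):
        print(f"{account}: {n} requests during {hour}")
```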

Note that this role involves grappling with questions about sensitive uses of OpenAI’s technology, which at times includes engaging with sexual, violent, or otherwise disturbing material.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Benefits and Perks
  • Medical, dental, and vision insurance for you and your family
  • Mental health and wellness support
  • 401(k) plan with 4% matching 
  • Unlimited time off and 18+ company holidays per year 
  • Paid parental leave (20 weeks) and family-planning support
  • Annual learning & development stipend ($1,500 per year)
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or other legally protected statuses. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via accommodation@openai.com.

Want some tips on how to get an interview at OpenAI?

What is OpenAI looking for?
If this role looks interesting to you, a great first step is to understand what excites you about the team, product or mission. Take your time thinking about this and then tell the team! Get in touch and communicate that passion.
What are interviews for the Trust & Safety Operations Analyst role like?
Interview processes vary by company, role and team. The best plan is to see what others have experienced and then plan accordingly.
How can you land an interview for the Trust & Safety Operations Analyst role?
A great first step is organizing your path to an offer. Check out Kiter for tools to get started!