
Security & privacy

OpenAI is committed to building trust in our organization and platform by protecting our customer data, models, and products.

OpenAI invests in security as we believe it is foundational to our mission. We safeguard computing efforts that advance artificial general intelligence and continuously prepare for emerging security threats.

Compliance and accreditation

OpenAI complies with GDPR and CCPA, and we can execute a Data Processing Agreement if you require one. Our API has been evaluated by a third-party security auditor and is SOC 2 Type 2 compliant.

  • External auditing

    The OpenAI API undergoes annual third-party penetration testing, which identifies security weaknesses before they can be exploited by malicious actors.

  • Customer requirements

    We help our customers meet regulatory, industry, and contractual requirements like HIPAA.

Reporting security issues

OpenAI invites security researchers, ethical hackers, and technology enthusiasts to report security issues via our Bug Bounty Program. The program offers safe harbor for good faith security testing and cash rewards for vulnerabilities based on their severity and impact.

We are committed to protecting people’s privacy.

Our goal is to build helpful AI models

We want our AI models to learn about the world—not private individuals. We use training information to help our AI models, like ChatGPT, learn about language and how to understand and respond to it.

We do not actively seek out personal information to train our models, and we do not use public information on the internet to build profiles about people, to advertise to or target them, or to sell user data.

Our models generate new words each time they are asked a question. They don’t store information in a database for recalling later or “copy and paste” training information when responding to questions.

We work to:

  • Reduce the amount of personal information in our training datasets

  • Train models to reject requests for personal information of private individuals

  • Minimize the possibility that our models might generate responses that include the personal information of private individuals

Read more about how our models are developed

Ways to manage data

One of the most useful features of AI models is that they can improve over time. We continuously improve our models through research breakthroughs and exposure to real-world problems and data.

We understand users may not want their data used to improve our models, so we provide ways for them to manage their data.

More information

For more information on how we use and protect personal information, please read our help article on data usage and Privacy policy.

FAQ

Data submitted through the OpenAI API is not used to train OpenAI models or improve OpenAI’s service offering. Data submitted through our non-API consumer services, such as ChatGPT or DALL·E, may be used to improve our models.

Learn more about security at OpenAI