As many who are keeping track of generative AI developments are aware, the FTC recently announced that it is investigating OpenAI's ChatGPT product. This investigation is important for privacy practitioners because, among other things, the agency wants to better understand how OpenAI uses personal information and whether its privacy representations are sufficient.

The areas the FTC asks about in the civil investigative demand sent to OpenAI suggest what the agency is likely to examine for any company creating, or using, AI tools. Those areas include:

  • What personal information OpenAI collects and how it retains or uses it, as well as the methods it offers for individuals to opt out and have their information deleted.
  • What data OpenAI uses to develop and train Large Language Models (“LLMs”), and how personal information is kept out of training data.
  • The policies and procedures in place that affect the generation of statements about individuals. In particular, the FTC has asked about mitigation strategies for statements that may be false, misleading, or disparaging.
  • Information about OpenAI's data security measures and policies, as well as any actual or suspected security incidents.

Not surprisingly, this investigation aligns with the FTC's previous guidance and concerns about potential consumer harms from AI.

Putting it into Practice: The FTC's investigation into OpenAI signals that the agency is looking closely at the privacy implications of this tool. Those using tools like this, or creating their own, will want to keep the FTC's prior guidance in mind.