The FTC recently issued comments on how companies can use artificial intelligence tools without engaging in deceptive or unfair trade practices or running afoul of the Fair Credit Reporting Act (FCRA). The FTC pointed to enforcement actions it has brought in this area and recommended that companies keep four key principles in mind when using AI tools. While much of the advice draws on requirements that apply to entities subject to the FCRA, the lessons may be useful for many companies.

The recommendations from the FTC include:

  • Transparency: The FTC encouraged companies to tell people when they are making automated decisions using AI tools. Such disclosures may be mandated under laws like the FCRA if, for example, the entity is automating decisions about credit eligibility. The FTC also reminded companies not to be deceptive or secretive about their use of AI tools, pointing to its Ashley Madison enforcement action, in which the company was found to have deceptively used fake profiles to encourage sign-ups. To be transparent, the FTC stressed, companies need to know “what data is used in [the company’s] model and how that data is used.” It also cautioned companies to think about how they would describe to consumers the AI-driven decisions made about them.
  • Fairness: Here, the FTC reminded companies not to discriminate against protected classes by, for example, making credit decisions based on zip codes when those decisions have a “disparate impact” on groups protected under the Civil Rights Act. The FTC also instructed companies to ensure fairness by giving people the ability to both access and correct their information, something required when the FCRA applies.
  • Accuracy: The FCRA imposes accuracy requirements. The FTC reminded companies that even if they are not providing consumer reports, they should still be concerned about accuracy, as the information they compile may be used for consumer reporting purposes, in which case the FCRA may apply. The FTC also looked to the world of consumer lending for “lessons” on accuracy, recommending that companies ensure their AI models work, are validated, and are retested to confirm that they perform as originally intended.
  • Accountability: The FTC stressed that companies should think about the impact their use of AI will have on consumers. As a resource, it pointed companies to the FTC’s 2016 Big Data report. Questions to ask include whether the data set being used is appropriately representative and whether the model accounts for potential biases. The FTC also suggested that companies consider using independent standards or outside experts to hold themselves accountable.

Putting it Into Practice: As automation tools become more common, companies would do well to keep these FTC recommendations in mind. They signal the Commission’s expectations, which the FTC has often gone on to enforce after issuing signaling commentary like this to the industry.